mirror of
https://github.com/browser-use/browser-use
synced 2026-05-06 17:52:15 +02:00
---
title: "OpenLIT"
description: "Complete observability for Browser Use with OpenLIT tracing"
icon: "chart-line"
mode: "wide"
---

## Overview

Browser Use integrates natively with [OpenLIT](https://github.com/openlit/openlit), an open-source, OpenTelemetry-native platform that provides complete, granular traces for every task your Browser Use agent performs, from high-level agent invocations down to individual browser actions.

Read more about OpenLIT in the [OpenLIT docs](https://docs.openlit.io).

## Setup

Install OpenLIT alongside Browser Use:

```bash
pip install openlit browser-use
```

## Usage

OpenLIT provides automatic, comprehensive instrumentation with **zero code changes** beyond initialization:

```python {5-6}
from browser_use import Agent, Browser, ChatOpenAI
import asyncio
import openlit

# Initialize OpenLIT - that's it!
openlit.init()

async def main():
    browser = Browser()

    llm = ChatOpenAI(
        model="gpt-4o",
    )

    agent = Agent(
        task="Find the number one trending post on Hacker News",
        llm=llm,
        browser=browser,
    )

    history = await agent.run()
    return history

if __name__ == "__main__":
    history = asyncio.run(main())
```

## Viewing Traces

OpenLIT provides a powerful dashboard where you can:

### Monitor Execution Flows

See the complete execution tree with timing information for every span. Click on any `invoke_model` span to see the exact prompt sent to the LLM and the complete response with the agent's reasoning.

### Track Costs and Token Usage

- Cost breakdown by agent, task, and model
- Token usage per LLM call with full input/output visibility
- Compare costs across different LLM providers
- Identify expensive prompts and optimize them

### Debug Failures with Agent Thoughts

When an automation fails, you can:

- See exactly which step failed
- Read the agent's thinking at the failure point
- Check the browser state and available elements
- Analyze whether the failure was due to bad reasoning or bad information
- Fix the root cause with full context

### Performance Optimization

- Identify slow steps (LLM calls vs browser actions vs HTTP requests)
- Compare execution times across runs
- Optimize `max_steps` and `max_actions_per_step`
- Track HTTP request latency for page navigations

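To build intuition for the `max_steps` cap, here is a minimal stdlib sketch of a bounded agent loop (`run_agent` and `step_fn` are illustrative names for this sketch, not browser-use internals):

```python
def run_agent(step_fn, max_steps=100):
    # Run the agent loop until the task reports completion or the cap is hit.
    history = []
    for step in range(max_steps):
        result = step_fn(step)
        history.append(result)
        if result == "done":
            break
    return history

# A fake step function that finishes on the third step (index 2).
print(len(run_agent(lambda i: "done" if i == 2 else "continue")))  # -> 3

# A task that never finishes is cut off at the cap.
print(len(run_agent(lambda i: "continue", max_steps=5)))  # -> 5
```

Lowering the cap bounds both runtime and LLM cost, at the risk of cutting off long tasks; traces make it easy to see which limit a run actually hit.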
## Configuration

### Custom OpenTelemetry Endpoint Configuration

```python
import openlit

# Configure a custom OTLP endpoint
openlit.init(
    otlp_endpoint="http://localhost:4318",
    application_name="my-browser-automation",
    environment="production"
)
```

### Environment Variables

You can also configure OpenLIT via environment variables:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_SERVICE_NAME="browser-automation"
export OTEL_ENVIRONMENT="production"
```

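Explicit arguments to `openlit.init()` typically take precedence over environment variables, which in turn override built-in defaults. A minimal stdlib sketch of that resolution order (`resolve_otlp_endpoint` is an illustrative helper for this sketch, not part of OpenLIT's API):

```python
import os

def resolve_otlp_endpoint(explicit=None, default="http://127.0.0.1:4318"):
    # Explicit argument wins, then the standard OTel env var, then the default.
    return explicit or os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT") or default

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
print(resolve_otlp_endpoint())                          # -> http://localhost:4318
print(resolve_otlp_endpoint("http://collector:4318"))   # -> http://collector:4318
```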
### Self-Hosted OpenLIT

If you prefer to keep your data on-premises:

```bash
# Using Docker
docker run -d \
  -p 4318:4318 \
  -p 3000:3000 \
  openlit/openlit:latest

# Access the dashboard at http://localhost:3000
```

|
|
|
|
## Integration with Existing Tools
|
|
|
|
OpenLIT uses OpenTelemetry under the hood, so it integrates seamlessly with:
|
|
- **Jaeger** - Distributed tracing visualization
|
|
- **Prometheus** - Metrics collection and alerting
|
|
- **Grafana** - Custom dashboards and analytics
|
|
- **Datadog** - APM and log management
|
|
- **New Relic** - Full-stack observability
|
|
- **Elastic APM** - Application performance monitoring
|
|
|
|
Simply configure OpenLIT to export to your existing OTLP-compatible endpoint.
|
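As one example, the standard OTLP environment variable can point OpenLIT at a local Jaeger instance (the image name, flag, and ports below are Jaeger's commonly documented defaults, not taken from the OpenLIT docs):

```shell
# Start Jaeger all-in-one with OTLP ingestion enabled
docker run -d \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

# Point OpenLIT's exporter at Jaeger's OTLP HTTP endpoint,
# then browse traces at http://localhost:16686
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```

The same pattern applies to any of the backends above: run (or point at) an OTLP-compatible collector and set the endpoint accordingly.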