Magnus Müller 7fb8e5b39b feat: implement SDK-native rate limiting for improved reliability
- Increase max_retries to 10 for all LLM providers (was 2-3)
- OpenAI: Use built-in retry with exponential backoff, jitter, Retry-After header support
- Anthropic: Use built-in retry with intelligent error classification
- Groq: Use built-in retry with capacity exceeded handling (498 errors)
- Google: Implement custom retry logic with pattern-based error detection
- Azure: Inherits improved retry from OpenAI base class
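Since the Google SDK path gets custom rather than built-in retry, the shape of that logic can be sketched as exponential backoff with jitter plus pattern-based error detection. This is an illustrative sketch only — the helper names (`is_retryable_message`, `call_with_retry`) and the pattern list are assumptions, not the actual browser-use implementation:

```python
import random
import time

# Substrings that mark transient errors worth retrying
# (illustrative patterns, not the exact list used in browser-use).
RETRYABLE_PATTERNS = ("rate limit", "quota exceeded", "503", "temporarily unavailable")


def is_retryable_message(message: str) -> bool:
    """Pattern-based error detection on the exception text."""
    msg = message.lower()
    return any(p in msg for p in RETRYABLE_PATTERNS)


def call_with_retry(fn, max_retries: int = 10, base_delay: float = 1.0, max_delay: float = 60.0):
    """Retry fn() with capped exponential backoff plus full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            # Give up on the last attempt or on non-retryable errors.
            if attempt == max_retries or not is_retryable_message(str(exc)):
                raise
            # Exponential backoff capped at max_delay, with full jitter.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))
```

The SDK-native paths (OpenAI, Anthropic, Groq) get an equivalent loop for free by passing `max_retries` to the client constructor.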

Benefits:
- More resilient to temporary API issues during automation
- Leverages provider SDKs' optimized retry logic
- Reduces custom retry code maintenance burden
- Automatically handles provider-specific error patterns

All providers now retry rate limits, server errors, and connection issues
while skipping authentication errors and bad requests.
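The retry/skip split above can be sketched as a status-code classifier. This is a simplified illustration under assumed status codes — the real providers classify via their SDK exception types rather than raw codes:

```python
def should_retry(status_code: int) -> bool:
    """Retry rate limits, capacity errors, timeouts, and server errors;
    never retry authentication failures or bad requests."""
    if status_code in (400, 401, 403, 422):  # bad requests, auth errors
        return False
    if status_code in (408, 429, 498):       # timeout, rate limit, Groq capacity exceeded
        return True
    return status_code >= 500                # server-side errors
```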
2025-06-26 18:08:51 +02:00