mirror of
https://github.com/browser-use/browser-use
synced 2026-05-06 17:52:15 +02:00
- Increase max_retries to 10 for all LLM providers (was 2-3)
- OpenAI: use built-in retry with exponential backoff, jitter, and Retry-After header support
- Anthropic: use built-in retry with intelligent error classification
- Groq: use built-in retry with capacity-exceeded handling (498 errors)
- Google: implement custom retry logic with pattern-based error detection
- Azure: inherits the improved retry behavior from the OpenAI base class

Benefits:
- More resilient to transient API issues during automation
- Leverages the provider SDKs' optimized retry logic
- Reduces the custom retry code maintenance burden
- Automatically handles provider-specific error patterns

All providers now retry rate limits, server errors, and connection issues, while skipping authentication errors and bad requests.
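The custom retry path described for Google (pattern-based error detection plus exponential backoff with jitter) can be sketched roughly as below. The pattern lists and function names here are illustrative assumptions, not the actual browser-use implementation:

```python
import random
import time

# Hypothetical pattern lists illustrating pattern-based error classification.
RETRYABLE_PATTERNS = ("rate limit", "429", "500", "503", "connection", "timeout")
NON_RETRYABLE_PATTERNS = ("authentication", "api key", "invalid request", "400", "401")


def is_retryable(error: Exception) -> bool:
    """Classify an error by matching its message against known patterns."""
    message = str(error).lower()
    if any(p in message for p in NON_RETRYABLE_PATTERNS):
        return False  # fail fast on auth errors and bad requests
    return any(p in message for p in RETRYABLE_PATTERNS)


def retry_with_backoff(fn, max_retries: int = 10, base_delay: float = 1.0):
    """Call fn(), retrying retryable errors with exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_retries or not is_retryable(exc):
                raise
            # Exponential backoff capped at 60s, with up to 25% random jitter.
            delay = min(base_delay * (2 ** attempt), 60.0)
            time.sleep(delay * (1 + random.uniform(0, 0.25)))
```

For the SDK-backed providers, the official OpenAI and Anthropic Python clients accept a `max_retries` argument on the client constructor (e.g. `OpenAI(max_retries=10)`), which enables their built-in backoff rather than any custom loop.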