---
title: "Sensitive Data"
description: "Handle sensitive information securely and avoid sending PII & passwords to the LLM."
icon: "shield"
mode: "wide"
---

```python
import asyncio
import os

from browser_use import Agent, Browser, ChatOpenAI

os.environ['ANONYMIZED_TELEMETRY'] = 'false'

agent = Agent(
    task='Log into example.com with username x_user and password x_pass',
    sensitive_data={
        'https://example.com': {
            'x_user': 'your-real-username@email.com',
            'x_pass': 'your-real-password123',
        },
    },
    use_vision=False,  # Disable vision to prevent the LLM from seeing sensitive data in screenshots
    llm=ChatOpenAI(model='gpt-4.1-mini'),
)

async def main():
    await agent.run()

asyncio.run(main())
```

## How it Works

1. **Text Filtering**: The LLM only ever sees the placeholders (`x_user`, `x_pass`); the real values are filtered out of the input text before it is sent.
2. **DOM Actions**: The real values are injected directly into form fields after the LLM call.

## Best Practices

- Use `Browser(allowed_domains=[...])` to restrict navigation
- Set `use_vision=False` to prevent screenshot leaks
- Use `storage_state='./auth.json'` for login cookies instead of passwords when possible
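The two-step flow described above (filter before the LLM call, inject after) can be sketched with plain string substitution. `redact` and `inject` below are hypothetical helper names for illustration, not part of the browser-use API:

```python
# Hypothetical sketch of the filter/inject flow; these helpers
# are illustrative only and not browser-use functions.

sensitive_data = {
    'x_user': 'your-real-username@email.com',
    'x_pass': 'your-real-password123',
}

def redact(text: str, secrets: dict) -> str:
    """Replace each real value with its placeholder before the text reaches the LLM."""
    for placeholder, value in secrets.items():
        text = text.replace(value, placeholder)
    return text

def inject(text: str, secrets: dict) -> str:
    """Substitute real values back into the action the LLM chose, after the LLM call."""
    for placeholder, value in secrets.items():
        text = text.replace(placeholder, value)
    return text

# The LLM sees only the placeholder...
llm_input = redact('Type your-real-password123 into #password', sensitive_data)
print(llm_input)  # Type x_pass into #password

# ...and the placeholder in its chosen action is swapped for the real value.
llm_action = 'fill(#password, x_pass)'
print(inject(llm_action, sensitive_data))  # fill(#password, your-real-password123)
```

The real implementation operates on the agent's message history and DOM actions rather than raw strings, but the ordering is the same: redaction happens before the model call, injection after it.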
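The best practices can be combined in one setup. This is a sketch only: `allowed_domains` on `Browser` comes from the list above, but the exact placement of `storage_state` (on `Browser` vs. elsewhere) may vary between browser-use versions, so treat the keyword positions as assumptions:

```python
from browser_use import Agent, Browser, ChatOpenAI

browser = Browser(
    allowed_domains=['example.com'],  # refuse navigation anywhere else
    storage_state='./auth.json',      # assumed placement: reuse saved cookies instead of passwords
)

agent = Agent(
    task='Check the account dashboard on example.com',
    browser=browser,
    use_vision=False,                 # no screenshots reach the LLM
    llm=ChatOpenAI(model='gpt-4.1-mini'),
)
```

With saved cookies the agent never needs the password at all, which is strictly safer than redaction.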