Compare commits

...

49 Commits

Author SHA1 Message Date
Paul Gauthier
1961543e2f set version to 0.81.3.dev 2025-04-11 08:37:59 +12:00
Paul Gauthier
b4f65734a5 version bump to 0.81.2 2025-04-11 08:37:56 +12:00
Paul Gauthier (aider)
0eb80553f6 style: Apply linter to versionbump.py 2025-04-11 08:37:37 +12:00
Paul Gauthier (aider)
110c63ae95 feat: Add --force flag to skip pre-push checks 2025-04-11 08:37:33 +12:00
Paul Gauthier
57304536bf copy 2025-04-11 08:23:16 +12:00
Paul Gauthier
a9ca5da139 feat: add grok-3-mini-beta to model settings with reasoning_effort 2025-04-11 08:22:44 +12:00
Paul Gauthier
947aebfbe0 copy 2025-04-11 08:13:15 +12:00
Paul Gauthier
fafc9268d4 feat: set grok-3-mini-beta edit_format to whole 2025-04-11 08:12:38 +12:00
Paul Gauthier
65a5d55436 feat: add grok-3-beta and grok-3-mini-beta model settings 2025-04-11 08:10:18 +12:00
Paul Gauthier (aider)
96b350400f feat: make highlight parameter case-insensitive in leaderboard 2025-04-11 07:28:01 +12:00
Paul Gauthier
7983b4caf2 copy 2025-04-11 07:25:10 +12:00
Paul Gauthier (aider)
e44122f1be fix: correct groq to grok typo in model settings yaml 2025-04-11 07:23:16 +12:00
Paul Gauthier (aider)
42618d7ec6 fix: correct groq to grok typos in model names and aliases 2025-04-11 07:23:05 +12:00
Paul Gauthier
1d0167bbf4 feat: add polyglot leaderboard entries for grok3 and optimus alpha 2025-04-11 07:17:56 +12:00
Paul Gauthier (aider)
43d4b21b23 feat: add "optimus" alias for openrouter model 2025-04-11 07:12:27 +12:00
Paul Gauthier (aider)
562171c548 feat: add grok3 alias for xai/groq-3-beta 2025-04-11 07:12:09 +12:00
Paul Gauthier (aider)
8dccecdd9f feat: add xai/groq-3-beta and xai/groq-3-mini-beta models 2025-04-11 07:10:23 +12:00
Paul Gauthier
940ae364d7 feat: add model settings for grok-3-beta, grok-3-mini-beta, optimus-alpha 2025-04-11 07:09:27 +12:00
Paul Gauthier (aider)
532bc454c5 feat: add openrouter/openrouter/optimus-alpha model metadata 2025-04-11 06:54:12 +12:00
Paul Gauthier (aider)
14ffe7782c feat: add openrouter/x-ai/grok-3-mini-beta model metadata 2025-04-11 06:37:33 +12:00
Paul Gauthier (aider)
2dd40fce44 feat: add openrouter/x-ai/grok-3-beta model metadata 2025-04-10 16:18:22 +12:00
Paul Gauthier (aider)
0c8bc46e28 fix: strip trailing } from urls extracted from error messages 2025-04-10 16:06:07 +12:00
Paul Gauthier
7d0dd29937 Merge branch 'main' of github.com:Aider-AI/aider 2025-04-08 18:33:41 +12:00
Paul Gauthier
349cd77821 copy 2025-04-08 08:10:21 +12:00
paul-gauthier
dc2d7b1dfe Merge pull request #3752 from tylersatre/main 2025-04-08 06:31:41 +12:00
Tyler Satre
be30329288 Update azure documentation 2025-04-07 11:04:15 -04:00
Paul Gauthier
71446d9f3c fix: get_file_mentions skips pure basenames with duplicates 2025-04-07 13:22:05 +12:00
Paul Gauthier (aider)
c9d4c8d09b fix: allow adding files by full path with existing basename 2025-04-07 13:19:25 +12:00
Paul Gauthier (aider)
c580ffdb70 test: add test for multiline backtick file mentions 2025-04-07 13:14:27 +12:00
Paul Gauthier
f46deb4eb7 improve diff-fenced prompts 2025-04-07 08:54:01 +12:00
Paul Gauthier
b3215bed48 copy 2025-04-07 08:13:22 +12:00
Paul Gauthier
2a9ab02753 chore: update polyglot leaderboard data 2025-04-07 08:12:37 +12:00
Paul Gauthier (aider)
0da586154d fix: quote values with '#' in sample aider.conf.yml config 2025-04-07 08:09:33 +12:00
Paul Gauthier
26d736551d Merge branch 'main' of github.com:Aider-AI/aider 2025-04-07 08:05:15 +12:00
paul-gauthier
9445a3118b Merge pull request #3733 from banjo/main 2025-04-06 07:28:56 +12:00
paul-gauthier
a2c46c7436 Merge pull request #3735 from KennyDizi/main 2025-04-06 07:20:22 +12:00
paul-gauthier
8df7a0960e Merge pull request #3736 from FelixLisczyk/tl-173 2025-04-06 07:19:30 +12:00
Felix Lisczyk
e7f35e7a35 Add Fireworks AI model 'deepseek-v3-0324' 2025-04-05 16:33:09 +02:00
Kenny Dizi
088e80e38b Add support model openrouter/google/gemini-2.5-pro-preview-03-25 2025-04-05 20:53:06 +07:00
Kenny Dizi
2d65c7f387 Remove trailing spaces 2025-04-05 20:51:11 +07:00
Anton Ödman
94db758eb7 fix: follow conventional commits examples by going all lowercase 2025-04-05 13:07:34 +02:00
Paul Gauthier
2bfb615d68 set version to 0.81.2.dev 2025-04-05 09:01:12 +13:00
Paul Gauthier
87275140f9 version bump to 0.81.1 2025-04-05 09:01:09 +13:00
Paul Gauthier
0672a68ba4 copy 2025-04-05 08:58:10 +13:00
Paul Gauthier (aider)
246e3ccfad refactor: Rename gemini-free alias to gemini-exp in MODEL_ALIASES 2025-04-05 08:56:56 +13:00
Paul Gauthier (aider)
b275ee919f feat: Update gemini model alias and add gemini-free alias 2025-04-05 08:56:35 +13:00
Paul Gauthier (aider)
eda796d5e0 feat: Add metadata and settings for gemini-2.5-pro-preview-03-25 2025-04-05 08:54:45 +13:00
Paul Gauthier
d1b3917309 copy 2025-04-04 22:08:49 +13:00
Paul Gauthier
ffee2b971f blame 2025-04-04 22:08:08 +13:00
27 changed files with 1326 additions and 767 deletions

View File

@@ -1,5 +1,24 @@
# Release history
### Aider v0.81.2
- Add support for `xai/grok-3-beta`, `xai/grok-3-mini-beta`, `openrouter/x-ai/grok-3-beta`, `openrouter/x-ai/grok-3-mini-beta`, and `openrouter/openrouter/optimus-alpha` models.
- Add alias "grok3" for `xai/grok-3-beta`.
- Add alias "optimus" for `openrouter/openrouter/optimus-alpha`.
- Fix URL extraction from error messages.
- Allow adding files by full path even if a file with the same basename is already in the chat.
- Fix quoting of values containing '#' in the sample `aider.conf.yml`.
- Add support for Fireworks AI model 'deepseek-v3-0324', by Felix Lisczyk.
- Commit messages generated by aider are now lowercase, by Anton Ödman.
- Aider wrote 64% of the code in this release.
### Aider v0.81.1
- Added support for the `gemini/gemini-2.5-pro-preview-03-25` model.
- Updated the `gemini` alias to point to `gemini/gemini-2.5-pro-preview-03-25`.
- Added the `gemini-exp` alias for `gemini/gemini-2.5-pro-exp-03-25`.
- Aider wrote 87% of the code in this release.
### Aider v0.81.0
- Added support for the `openrouter/openrouter/quasar-alpha` model.

View File

@@ -27,13 +27,13 @@ cog.out(text)
<a href="https://github.com/Aider-AI/aider/stargazers"><img alt="GitHub Stars" title="Total number of GitHub stars the Aider project has received"
src="https://img.shields.io/github/stars/Aider-AI/aider?style=flat-square&logo=github&color=f1c40f&labelColor=555555"/></a>
<a href="https://pypi.org/project/aider-chat/"><img alt="PyPI Downloads" title="Total number of installations via pip from PyPI"
src="https://img.shields.io/badge/📦%20Installs-1.8M-2ecc71?style=flat-square&labelColor=555555"/></a>
src="https://img.shields.io/badge/📦%20Installs-1.9M-2ecc71?style=flat-square&labelColor=555555"/></a>
<img alt="Tokens per week" title="Number of tokens processed weekly by Aider users"
src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-square&labelColor=555555"/>
<a href="https://openrouter.ai/"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
src="https://img.shields.io/badge/🔄%20Singularity-87%25-e74c3c?style=flat-square&labelColor=555555"/></a>
src="https://img.shields.io/badge/🔄%20Singularity-86%25-e74c3c?style=flat-square&labelColor=555555"/></a>
<!--[[[end]]]-->
</p>

View File

@@ -1,6 +1,6 @@
from packaging import version
__version__ = "0.81.1.dev"
__version__ = "0.81.3.dev"
safe_version = __version__
try:

View File

@@ -143,7 +143,10 @@ class YamlHelpFormatter(argparse.HelpFormatter):
default = "true"
if default:
parts.append(f"#{switch}: {default}\n")
if "#" in default:
parts.append(f'#{switch}: "{default}"\n')
else:
parts.append(f"#{switch}: {default}\n")
elif action.nargs in ("*", "+") or isinstance(action, argparse._AppendAction):
parts.append(f"#{switch}: xxx")
parts.append("## Specify multiple values like this:")
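The hunk above quotes default values containing `#` because YAML treats an unquoted `#` as the start of a comment, so a sample line like `user-input-color: #00cc00` would parse as an empty value once uncommented. A minimal standalone sketch of just the quoting rule (not the full formatter):

```python
def format_default(switch: str, default: str) -> str:
    # YAML treats an unquoted "#" as the start of a comment, so a value
    # like #00cc00 would be silently dropped by a YAML parser.
    # Quote any default that contains "#" so it round-trips intact.
    if "#" in default:
        return f'#{switch}: "{default}"\n'
    return f"#{switch}: {default}\n"

print(format_default("user-input-color", "#00cc00"))  # → #user-input-color: "#00cc00"
print(format_default("stream", "true"))               # → #stream: true
```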

View File

@@ -926,7 +926,7 @@ class Coder:
url_pattern = re.compile(r'(https?://[^\s/$.?#].[^\s"]*)')
urls = list(set(url_pattern.findall(text))) # Use set to remove duplicates
for url in urls:
url = url.rstrip(".',\"")
url = url.rstrip(".',\"}") # Added } to the characters to strip
self.io.offer_url(url)
return urls
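The effect of adding `}` to the stripped characters can be shown in a self-contained sketch using the same regex as the hunk above (the function name here is illustrative, not the actual method):

```python
import re

def extract_urls(text: str) -> list:
    # Find URLs, dedupe, then strip trailing punctuation. "}" is included
    # because URLs quoted inside JSON error payloads often arrive with a
    # trailing brace, e.g. ... "url": "https://example.com/docs"}
    url_pattern = re.compile(r'(https?://[^\s/$.?#].[^\s"]*)')
    urls = list(set(url_pattern.findall(text)))
    return [url.rstrip(".',\"}") for url in urls]

# The trailing "}" from a JSON error body no longer sticks to the URL:
print(extract_urls('error: {"hint": see https://example.com/docs}'))
# → ['https://example.com/docs']
```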
@@ -1626,10 +1626,6 @@ class Coder:
mentioned_rel_fnames = set()
fname_to_rel_fnames = {}
for rel_fname in addable_rel_fnames:
# Skip files that share a basename with files already in chat
if os.path.basename(rel_fname) in existing_basenames:
continue
normalized_rel_fname = rel_fname.replace("\\", "/")
normalized_words = set(word.replace("\\", "/") for word in words)
if normalized_rel_fname in normalized_words:
@@ -1644,6 +1640,10 @@ class Coder:
fname_to_rel_fnames[fname].append(rel_fname)
for fname, rel_fnames in fname_to_rel_fnames.items():
# If the basename is already in chat, don't add based on a basename mention
if fname in existing_basenames:
continue
# If the basename mention is unique among addable files and present in the text
if len(rel_fnames) == 1 and fname in words:
mentioned_rel_fnames.add(rel_fnames[0])
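The two hunks above move the duplicate-basename check from the candidate loop into the basename-mention loop: a full-path mention now always adds the file, even when another file with the same basename is in the chat, while a bare basename mention of an already-present name is still skipped. A compressed sketch of the reordered logic (the real method normalizes more word forms):

```python
import os

def file_mentions(words, addable_rel_fnames, existing_basenames):
    mentioned = set()
    fname_to_rel = {}
    norm_words = {w.replace("\\", "/") for w in words}
    for rel_fname in addable_rel_fnames:
        if rel_fname.replace("\\", "/") in norm_words:
            # Full-path mentions always match, even if a file with the
            # same basename is already in the chat.
            mentioned.add(rel_fname)
        fname = os.path.basename(rel_fname)
        fname_to_rel.setdefault(fname, []).append(rel_fname)
    for fname, rel_fnames in fname_to_rel.items():
        # Bare-basename mentions are skipped when that basename is
        # already in the chat (this is the relocated check).
        if fname in existing_basenames:
            continue
        if len(rel_fnames) == 1 and fname in words:
            mentioned.add(rel_fnames[0])
    return mentioned
```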

View File

@@ -19,7 +19,7 @@ class EditBlockFencedPrompts(EditBlockPrompts):
Here are the *SEARCH/REPLACE* blocks:
{fence[0]}
{fence[0]}python
mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
@@ -29,7 +29,7 @@ from flask import Flask
>>>>>>> REPLACE
{fence[1]}
{fence[0]}
{fence[0]}python
mathweb/flask/app.py
<<<<<<< SEARCH
def factorial(n):
@@ -44,7 +44,7 @@ def factorial(n):
>>>>>>> REPLACE
{fence[1]}
{fence[0]}
{fence[0]}python
mathweb/flask/app.py
<<<<<<< SEARCH
return str(factorial(n))
@@ -68,7 +68,7 @@ mathweb/flask/app.py
Here are the *SEARCH/REPLACE* blocks:
{fence[0]}
{fence[0]}python
hello.py
<<<<<<< SEARCH
=======
@@ -79,7 +79,7 @@ def hello():
>>>>>>> REPLACE
{fence[1]}
{fence[0]}
{fence[0]}python
main.py
<<<<<<< SEARCH
def hello():
@@ -93,3 +93,50 @@ from hello import hello
""",
),
]
system_reminder = """# *SEARCH/REPLACE block* Rules:
Every *SEARCH/REPLACE block* must use this format:
1. The opening fence and code language, eg: {fence[0]}python
2. The *FULL* file path alone on a line, verbatim. No bold asterisks, no quotes around it, no escaping of characters, etc.
3. The start of search block: <<<<<<< SEARCH
4. A contiguous chunk of lines to search for in the existing source code
5. The dividing line: =======
6. The lines to replace into the source code
7. The end of the replace block: >>>>>>> REPLACE
8. The closing fence: {fence[1]}
Use the *FULL* file path, as shown to you by the user.
{quad_backtick_reminder}
Every *SEARCH* section must *EXACTLY MATCH* the existing file content, character for character, including all comments, docstrings, etc.
If the file contains code or other data wrapped/escaped in json/xml/quotes or other containers, you need to propose edits to the literal contents of the file, including the container markup.
*SEARCH/REPLACE* blocks will *only* replace the first match occurrence.
Include multiple unique *SEARCH/REPLACE* blocks if needed.
Include enough lines in each SEARCH section to uniquely match each set of lines that need to change.
Keep *SEARCH/REPLACE* blocks concise.
Break large *SEARCH/REPLACE* blocks into a series of smaller blocks that each change a small portion of the file.
Include just the changing lines, and a few surrounding lines if needed for uniqueness.
Do not include long runs of unchanging lines in *SEARCH/REPLACE* blocks.
Only create *SEARCH/REPLACE* blocks for files that the user has added to the chat!
To move code within a file, use 2 *SEARCH/REPLACE* blocks: 1 to delete it from its current location, 1 to insert it in the new location.
Pay attention to which filenames the user wants you to edit, especially if they are asking you to create a new file.
If you want to put code in a new file, use a *SEARCH/REPLACE block* with:
- A new file path, including dir name if needed
- An empty `SEARCH` section
- The new file's contents in the `REPLACE` section
To rename files which have been added to the chat, use shell commands at the end of your response.
If the user just says something like "ok" or "go ahead" or "do that" they probably want you to make SEARCH/REPLACE blocks for the code changes you just proposed.
The user will say when they've applied your edits. If they haven't explicitly confirmed the edits have been applied, they probably want proper SEARCH/REPLACE blocks.
{lazy_prompt}
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
{shell_cmd_reminder}
"""

View File

@@ -92,7 +92,10 @@ MODEL_ALIASES = {
"quasar": "openrouter/openrouter/quasar-alpha",
"r1": "deepseek/deepseek-reasoner",
"gemini-2.5-pro": "gemini/gemini-2.5-pro-exp-03-25",
"gemini": "gemini/gemini-2.5-pro-exp-03-25",
"gemini": "gemini/gemini-2.5-pro-preview-03-25",
"gemini-exp": "gemini/gemini-2.5-pro-exp-03-25",
"grok3": "xai/grok-3-beta",
"optimus": "openrouter/openrouter/optimus-alpha",
}
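Alias resolution itself is a plain dictionary lookup that maps short names onto full provider/model identifiers before model metadata is loaded; unknown names pass through unchanged. A minimal sketch using the aliases from this hunk (the helper name here is hypothetical):

```python
MODEL_ALIASES = {
    "grok3": "xai/grok-3-beta",
    "optimus": "openrouter/openrouter/optimus-alpha",
    "gemini": "gemini/gemini-2.5-pro-preview-03-25",
    "gemini-exp": "gemini/gemini-2.5-pro-exp-03-25",
}

def resolve_model_name(name: str) -> str:
    # Return the full model identifier for a known alias,
    # or the name unchanged if it is not an alias.
    return MODEL_ALIASES.get(name, name)

print(resolve_model_name("grok3"))   # → xai/grok-3-beta
print(resolve_model_name("gpt-4o"))  # → gpt-4o
```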
# Model metadata loaded from resources and user's files.

View File

@@ -10,12 +10,13 @@ one-line Git commit messages based on the provided diffs.
Review the provided context and diffs which are about to be committed to a git repo.
Review the diffs carefully.
Generate a one-line commit message for those changes.
The commit message should be entirely lowercase.
The commit message should be structured as follows: <type>: <description>
Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test
Ensure the commit message:
- Starts with the appropriate prefix.
- Is in the imperative mood (e.g., \"Add feature\" not \"Added feature\" or \"Adding feature\").
- Is in the imperative mood (e.g., \"add feature\" not \"added feature\" or \"adding feature\").
- Does not exceed 72 characters.
Reply only with the one-line commit message, without any additional text, explanations, \

View File

@@ -108,6 +108,15 @@
"output_cost_per_token": 0.0000009,
"mode": "chat",
},
"fireworks_ai/accounts/fireworks/models/deepseek-v3-0324": {
"max_tokens": 160000,
"max_input_tokens": 100000,
"max_output_tokens": 8192,
"litellm_provider": "fireworks_ai",
"input_cost_per_token": 0.0000009,
"output_cost_per_token": 0.0000009,
"mode": "chat",
},
"o3-mini": {
"max_tokens": 100000,
"max_input_tokens": 200000,
@@ -168,6 +177,14 @@
"supports_system_messages": true,
"supports_prompt_caching": true
},
"openrouter/openrouter/optimus-alpha": {
"max_input_tokens": 1000000,
"max_output_tokens": 32000,
"input_cost_per_token": 0.0,
"output_cost_per_token": 0.0,
"litellm_provider": "openrouter",
"mode": "chat"
},
"openrouter/openai/gpt-4o-mini": {
"max_tokens": 16384,
"max_input_tokens": 128000,
@@ -317,6 +334,42 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"gemini/gemini-2.5-pro-preview-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0.00000125,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0.000010,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "gemini",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"vertex_ai/gemini-2.5-pro-exp-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
@@ -353,6 +406,78 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"vertex_ai/gemini-2.5-pro-preview-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0.00000125,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0.000010,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5-pro-preview-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0.00000125,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0.000010,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5-pro-exp-03-25:free": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
@@ -367,9 +492,9 @@
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
@@ -389,7 +514,43 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.0-flash-exp:free": {
"openrouter/x-ai/grok-3-beta": {
"max_tokens": 131072,
"max_input_tokens": 131072,
"max_output_tokens": 131072,
"input_cost_per_token": 0.000003,
"output_cost_per_token": 0.000015,
"litellm_provider": "openrouter",
"mode": "chat"
},
"xai/grok-3-beta": {
"max_tokens": 131072,
"max_input_tokens": 131072,
"max_output_tokens": 131072,
"input_cost_per_token": 0.000003,
"output_cost_per_token": 0.000015,
"litellm_provider": "xai",
"mode": "chat"
},
"openrouter/x-ai/grok-3-mini-beta": {
"max_tokens": 131072,
"max_input_tokens": 131072,
"max_output_tokens": 131072,
"input_cost_per_token": 0.0000003,
"output_cost_per_token": 0.0000005,
"litellm_provider": "openrouter",
"mode": "chat"
},
"xai/grok-3-mini-beta": {
"max_tokens": 131072,
"max_input_tokens": 131072,
"max_output_tokens": 131072,
"input_cost_per_token": 0.0000003,
"output_cost_per_token": 0.0000005,
"litellm_provider": "xai",
"mode": "chat"
},
"openrouter/google/gemini-2.0-flash-exp:free": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 8192,

View File

@@ -817,7 +817,7 @@
use_temperature: false
editor_model_name: openrouter/deepseek/deepseek-chat
editor_edit_format: editor-diff
- name: fireworks_ai/accounts/fireworks/models/deepseek-r1
edit_format: diff
weak_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
@@ -838,6 +838,14 @@
extra_params:
max_tokens: 128000
- name: fireworks_ai/accounts/fireworks/models/deepseek-v3-0324
edit_format: diff
use_repo_map: true
reminder: sys
examples_as_sys_msg: true
extra_params:
max_tokens: 160000
- name: openai/o3-mini
edit_format: diff
weak_model_name: gpt-4o-mini
@@ -847,7 +855,7 @@
editor_edit_format: editor-diff
system_prompt_prefix: "Formatting re-enabled. "
accepts_settings: ["reasoning_effort"]
- name: o3-mini
edit_format: diff
weak_model_name: gpt-4o-mini
@@ -897,7 +905,7 @@
examples_as_sys_msg: true
editor_model_name: gpt-4o
editor_edit_format: editor-diff
- name: openai/gpt-4.5-preview
edit_format: diff
weak_model_name: gpt-4o-mini
@@ -942,13 +950,18 @@
- name: gemini/gemma-3-27b-it
use_system_prompt: false
- name: openrouter/google/gemma-3-27b-it:free
use_system_prompt: false
- name: openrouter/google/gemma-3-27b-it
use_system_prompt: false
- name: gemini/gemini-2.5-pro-preview-03-25
edit_format: diff-fenced
use_repo_map: true
weak_model_name: gemini/gemini-2.0-flash
- name: gemini/gemini-2.5-pro-exp-03-25
edit_format: diff-fenced
use_repo_map: true
@@ -965,7 +978,41 @@
# Need metadata for this one...
#weak_model_name: vertex_ai/gemini-2.0-flash
- name: vertex_ai/gemini-2.5-pro-preview-03-25
edit_format: diff-fenced
use_repo_map: true
# Need metadata for this one...
#weak_model_name: vertex_ai/gemini-2.0-flash
- name: openrouter/openrouter/quasar-alpha
use_repo_map: true
edit_format: diff
examples_as_sys_msg: true
- name: openrouter/x-ai/grok-3-beta
use_repo_map: true
edit_format: diff
- name: xai/grok-3-beta
use_repo_map: true
edit_format: diff
accepts_settings:
- reasoning_effort
- name: openrouter/x-ai/grok-3-mini-beta
use_repo_map: true
edit_format: whole
accepts_settings:
- reasoning_effort
- name: xai/grok-3-mini-beta
use_repo_map: true
edit_format: whole
accepts_settings:
- reasoning_effort
- name: openrouter/openrouter/optimus-alpha
use_repo_map: true
edit_format: diff
examples_as_sys_msg: true

View File

@@ -24,6 +24,25 @@ cog.out(text)
]]]-->
### Aider v0.81.2
- Add support for `xai/grok-3-beta`, `xai/grok-3-mini-beta`, `openrouter/x-ai/grok-3-beta`, `openrouter/x-ai/grok-3-mini-beta`, and `openrouter/openrouter/optimus-alpha` models.
- Add alias "grok3" for `xai/grok-3-beta`.
- Add alias "optimus" for `openrouter/openrouter/optimus-alpha`.
- Fix URL extraction from error messages.
- Allow adding files by full path even if a file with the same basename is already in the chat.
- Fix quoting of values containing '#' in the sample `aider.conf.yml`.
- Add support for Fireworks AI model 'deepseek-v3-0324', by Felix Lisczyk.
- Commit messages generated by aider are now lowercase, by Anton Ödman.
- Aider wrote 64% of the code in this release.
### Aider v0.81.1
- Added support for the `gemini/gemini-2.5-pro-preview-03-25` model.
- Updated the `gemini` alias to point to `gemini/gemini-2.5-pro-preview-03-25`.
- Added the `gemini-exp` alias for `gemini/gemini-2.5-pro-exp-03-25`.
- Aider wrote 87% of the code in this release.
### Aider v0.81.0
- Added support for the `openrouter/openrouter/quasar-alpha` model.

View File

@@ -4410,3 +4410,41 @@
Vasil Markoukin: 129
start_tag: v0.79.0
total_lines: 2115
- aider_percentage: 85.55
aider_total: 225
end_date: '2025-04-04'
end_tag: v0.81.0
file_counts:
.github/workflows/check_pypi_version.yml:
Paul Gauthier: 11
Paul Gauthier (aider): 75
.github/workflows/windows_check_pypi_version.yml:
Paul Gauthier: 4
Paul Gauthier (aider): 86
aider/__init__.py:
Paul Gauthier: 1
aider/coders/base_coder.py:
Paul Gauthier (aider): 4
aider/exceptions.py:
Paul Gauthier: 6
Paul Gauthier (aider): 12
aider/main.py:
Paul Gauthier (aider): 40
aider/models.py:
Paul Gauthier (aider): 2
aider/resources/model-settings.yml:
Paul Gauthier: 9
Paul Gauthier (aider): 1
aider/website/_includes/leaderboard.js:
Paul Gauthier (aider): 5
aider/website/docs/leaderboards/index.md:
Paul Gauthier: 1
aider/website/index.html:
Paul Gauthier: 3
tests/basic/test_exceptions.py:
Paul Gauthier: 3
grand_total:
Paul Gauthier: 38
Paul Gauthier (aider): 225
start_tag: v0.80.0
total_lines: 263

View File

@@ -883,4 +883,108 @@
date: 2025-04-04
versions: 0.80.5.dev
seconds_per_case: 14.8
total_cost: 0.0000
- dirname: 2025-04-06-08-39-52--llama-4-maverick-17b-128e-instruct-polyglot-whole
test_cases: 225
model: Llama 4 Maverick
edit_format: whole
commit_hash: 9445a31
pass_rate_1: 4.4
pass_rate_2: 15.6
pass_num_1: 10
pass_num_2: 35
percent_cases_well_formed: 99.1
error_outputs: 12
num_malformed_responses: 2
num_with_malformed_responses: 2
user_asks: 248
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 4
total_tests: 225
command: aider --model nvidia_nim/meta/llama-4-maverick-17b-128e-instruct
date: 2025-04-06
versions: 0.81.2.dev
seconds_per_case: 20.5
total_cost: 0.0000
- dirname: 2025-04-10-04-21-31--grok3-diff-exuser
test_cases: 225
model: Grok 3 Beta
edit_format: diff
commit_hash: 2dd40fc-dirty
pass_rate_1: 22.2
pass_rate_2: 53.3
pass_num_1: 50
pass_num_2: 120
percent_cases_well_formed: 99.6
error_outputs: 1
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 68
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
total_tests: 225
command: aider --model openrouter/x-ai/grok-3-beta
date: 2025-04-10
versions: 0.81.2.dev
seconds_per_case: 15.3
total_cost: 11.0338
- dirname: 2025-04-10-18-47-24--grok3-mini-whole-exuser
test_cases: 225
model: Grok 3 Mini Beta
edit_format: whole
commit_hash: 14ffe77-dirty
pass_rate_1: 11.1
pass_rate_2: 34.7
pass_num_1: 25
pass_num_2: 78
percent_cases_well_formed: 100.0
error_outputs: 3
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 73
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 5
total_tests: 225
command: aider --model openrouter/x-ai/grok-3-mini-beta
date: 2025-04-10
versions: 0.81.2.dev
seconds_per_case: 35.1
total_cost: 0.7856
- dirname: 2025-04-10-19-02-44--oalpha-diff-exsys
test_cases: 225
model: Optimus Alpha
edit_format: diff
commit_hash: 532bc45-dirty
pass_rate_1: 21.3
pass_rate_2: 52.9
pass_num_1: 48
pass_num_2: 119
percent_cases_well_formed: 97.3
error_outputs: 7
num_malformed_responses: 6
num_with_malformed_responses: 6
user_asks: 182
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 3
total_tests: 225
command: aider --model openrouter/openrouter/optimus-alpha
date: 2025-04-10
versions: 0.81.2.dev
seconds_per_case: 18.4
total_cost: 0.0000

View File

@@ -17,14 +17,14 @@ document.addEventListener('DOMContentLoaded', function () {
backgroundColor: function(context) {
const row = allData[context.dataIndex];
if (row && row.edit_format === 'whole') {
return diagonalPattern;
return redDiagonalPattern; // Use red pattern for highlighted whole format
}
const label = leaderboardData.labels[context.dataIndex] || '';
return (label && label.includes(HIGHLIGHT_MODEL)) ? 'rgba(255, 99, 132, 0.2)' : 'rgba(54, 162, 235, 0.2)';
return (label && HIGHLIGHT_MODEL && label.toLowerCase().includes(HIGHLIGHT_MODEL.toLowerCase())) ? 'rgba(255, 99, 132, 0.2)' : 'rgba(54, 162, 235, 0.2)';
},
borderColor: function(context) {
const label = context.chart.data.labels[context.dataIndex] || '';
return (label && label.includes(HIGHLIGHT_MODEL)) ? 'rgba(255, 99, 132, 1)' : 'rgba(54, 162, 235, 1)';
return (label && HIGHLIGHT_MODEL && label.toLowerCase().includes(HIGHLIGHT_MODEL.toLowerCase())) ? 'rgba(255, 99, 132, 1)' : 'rgba(54, 162, 235, 1)';
},
borderWidth: 1
}, {
@@ -78,11 +78,13 @@ document.addEventListener('DOMContentLoaded', function () {
leaderboardChart.render();
}
// Use displayedData in the backgroundColor callback instead of allData
// Update backgroundColor and borderColor for the main dataset based on displayedData
leaderboardData.datasets[0].backgroundColor = function(context) {
const row = displayedData[context.dataIndex];
const label = leaderboardData.labels[context.dataIndex] || '';
if (label && label.includes(HIGHLIGHT_MODEL)) {
const isHighlighted = label && HIGHLIGHT_MODEL && label.toLowerCase().includes(HIGHLIGHT_MODEL.toLowerCase());
if (isHighlighted) {
if (row && row.edit_format === 'whole') return redDiagonalPattern;
else return 'rgba(255, 99, 132, 0.2)';
} else if (row && row.edit_format === 'whole') {
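The JavaScript change above lowercases both sides before the substring check, so `?highlight=grok` matches "Grok 3 Beta". The same predicate, expressed as a Python sketch for clarity:

```python
def is_highlighted(label: str, highlight_model: str) -> bool:
    # Case-insensitive version of the leaderboard highlight check;
    # previously this was a case-sensitive label.includes(HIGHLIGHT_MODEL).
    return bool(label) and bool(highlight_model) and \
        highlight_model.lower() in label.lower()

print(is_highlighted("Grok 3 Beta", "grok"))  # → True
```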

Binary file not shown. Size: 260 KiB

File diff suppressed because it is too large.

View File

@@ -171,19 +171,19 @@
#stream: true
## Set the color for user input (default: #00cc00)
#user-input-color: #00cc00
#user-input-color: "#00cc00"
## Set the color for tool output (default: None)
#tool-output-color: "xxx"
## Set the color for tool error messages (default: #FF2222)
#tool-error-color: #FF2222
#tool-error-color: "#FF2222"
## Set the color for tool warning messages (default: #FFA500)
#tool-warning-color: #FFA500
#tool-warning-color: "#FFA500"
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff
#assistant-output-color: "#0088ff"
## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: "xxx"

View File

@@ -569,6 +569,14 @@ cog.out("```\n")
extra_params:
max_tokens: 128000
- name: fireworks_ai/accounts/fireworks/models/deepseek-v3-0324
edit_format: diff
use_repo_map: true
reminder: sys
examples_as_sys_msg: true
extra_params:
max_tokens: 160000
- name: fireworks_ai/accounts/fireworks/models/qwq-32b
edit_format: diff
weak_model_name: fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
@@ -615,6 +623,11 @@ cog.out("```\n")
weak_model_name: gemini/gemini-2.0-flash
use_repo_map: true
- name: gemini/gemini-2.5-pro-preview-03-25
edit_format: diff-fenced
weak_model_name: gemini/gemini-2.0-flash
use_repo_map: true
- name: gemini/gemini-exp-1114
edit_format: diff
use_repo_map: true
@@ -1097,6 +1110,11 @@ cog.out("```\n")
accepts_settings:
- reasoning_effort
- name: openrouter/openrouter/optimus-alpha
edit_format: diff
use_repo_map: true
examples_as_sys_msg: true
- name: openrouter/openrouter/quasar-alpha
edit_format: diff
use_repo_map: true
@@ -1109,6 +1127,15 @@ cog.out("```\n")
editor_model_name: openrouter/qwen/qwen-2.5-coder-32b-instruct
editor_edit_format: editor-diff
- name: openrouter/x-ai/grok-3-beta
edit_format: diff
use_repo_map: true
- name: openrouter/x-ai/grok-3-mini-beta
use_repo_map: true
accepts_settings:
- reasoning_effort
- name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
edit_format: diff
weak_model_name: vertex_ai/claude-3-5-haiku@20241022
@@ -1174,9 +1201,24 @@ cog.out("```\n")
edit_format: diff-fenced
use_repo_map: true
- name: vertex_ai/gemini-2.5-pro-preview-03-25
edit_format: diff-fenced
use_repo_map: true
- name: vertex_ai/gemini-pro-experimental
edit_format: diff-fenced
use_repo_map: true
- name: xai/grok-3-beta
edit_format: diff
use_repo_map: true
accepts_settings:
- reasoning_effort
- name: xai/grok-3-mini-beta
use_repo_map: true
accepts_settings:
- reasoning_effort
```
<!--[[[end]]]-->

View File

@@ -225,19 +225,19 @@ cog.outl("```")
#stream: true
## Set the color for user input (default: #00cc00)
#user-input-color: #00cc00
#user-input-color: "#00cc00"
## Set the color for tool output (default: None)
#tool-output-color: "xxx"
## Set the color for tool error messages (default: #FF2222)
#tool-error-color: #FF2222
#tool-error-color: "#FF2222"
## Set the color for tool warning messages (default: #FFA500)
#tool-warning-color: #FFA500
#tool-warning-color: "#FFA500"
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff
#assistant-output-color: "#0088ff"
## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: "xxx"

View File

@@ -80,9 +80,12 @@ for alias, model in sorted(MODEL_ALIASES.items()):
- `4o`: gpt-4o
- `deepseek`: deepseek/deepseek-chat
- `flash`: gemini/gemini-2.0-flash-exp
- `gemini`: gemini/gemini-2.5-pro-exp-03-25
- `gemini`: gemini/gemini-2.5-pro-preview-03-25
- `gemini-2.5-pro`: gemini/gemini-2.5-pro-exp-03-25
- `gemini-exp`: gemini/gemini-2.5-pro-exp-03-25
- `grok3`: xai/grok-3-beta
- `haiku`: claude-3-5-haiku-20241022
- `optimus`: openrouter/openrouter/optimus-alpha
- `opus`: claude-3-opus-20240229
- `quasar`: openrouter/openrouter/quasar-alpha
- `r1`: deepseek/deepseek-reasoner
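The alias list above amounts to a dictionary lookup performed before model selection. A minimal sketch of how such a table resolves (the dict below copies a few entries from the list for illustration only; it is not aider's actual `MODEL_ALIASES`):

```python
# Illustrative subset of the alias table shown above.
MODEL_ALIASES = {
    "grok3": "xai/grok-3-beta",
    "optimus": "openrouter/openrouter/optimus-alpha",
    "quasar": "openrouter/openrouter/quasar-alpha",
    "gemini": "gemini/gemini-2.5-pro-preview-03-25",
}

def resolve_model(name: str) -> str:
    """Return the full model name for an alias, or the name unchanged."""
    return MODEL_ALIASES.get(name, name)

print(resolve_model("grok3"))  # resolves the alias
print(resolve_model("xai/grok-3-beta"))  # full names pass through unchanged
```

Note that per the `e44122f1be` / `42618d7ec6` commits above, the `grok3` alias briefly pointed at a mistyped `xai/groq-3-beta` before the groq/grok typo was corrected.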

View File

@@ -264,12 +264,12 @@ tr:hover { background-color: #f5f5f5; }
</style>
<table>
<tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>900,070</td><td class='right'>83.0%</td></tr>
<tr><td>anthropic/claude-3-7-sonnet-20250219</td><td class='right'>144,371</td><td class='right'>13.3%</td></tr>
<tr><td>openrouter/openrouter/quasar-alpha</td><td class='right'>25,535</td><td class='right'>2.4%</td></tr>
<tr><td>openrouter/deepseek/deepseek-chat-v3-0324:free</td><td class='right'>11,324</td><td class='right'>1.0%</td></tr>
<tr><td>openrouter/REDACTED</td><td class='right'>1,755</td><td class='right'>0.2%</td></tr>
<tr><td>vertex_ai/REDACTED</td><td class='right'>1,739</td><td class='right'>0.2%</td></tr>
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>1,174,432</td><td class='right'>77.7%</td></tr>
<tr><td>gemini/gemini-2.5-pro-preview-03-25</td><td class='right'>269,898</td><td class='right'>17.9%</td></tr>
<tr><td>openrouter/openrouter/quasar-alpha</td><td class='right'>36,847</td><td class='right'>2.4%</td></tr>
<tr><td>openrouter/x-ai/grok-3-mini-beta</td><td class='right'>16,987</td><td class='right'>1.1%</td></tr>
<tr><td>openrouter/deepseek/deepseek-chat-v3-0324:free</td><td class='right'>11,324</td><td class='right'>0.7%</td></tr>
<tr><td>openrouter/REDACTED</td><td class='right'>1,843</td><td class='right'>0.1%</td></tr>
</table>
{: .note :}

View File

@@ -124,6 +124,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
latest_mod_date = max(mod_dates)
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
]]]-->
April 04, 2025.
April 11, 2025.
<!--[[[end]]]-->
</p>

View File

@@ -12,16 +12,16 @@ python -m pip install -U aider-chat
# Mac/Linux:
export AZURE_API_KEY=<key>
export AZURE_API_VERSION=2023-05-15
export AZURE_API_VERSION=2024-12-01-preview
export AZURE_API_BASE=https://myendpt.openai.azure.com
# Windows
setx AZURE_API_KEY <key>
setx AZURE_API_VERSION 2023-05-15
setx AZURE_API_VERSION 2024-12-01-preview
setx AZURE_API_BASE https://myendpt.openai.azure.com
# ... restart your shell after setx commands
aider --model azure/<your_deployment_name>
aider --model azure/<your_model_deployment_name>
# List models available from Azure
aider --list-models azure/
@@ -29,3 +29,9 @@ aider --list-models azure/
Note that aider will also use environment variables
like `AZURE_OPENAI_API_xxx`.
The `aider --list-models azure/` command will list all models that aider supports through Azure, not the models that are available for the provided endpoint.
When setting the model to use with `--model azure/<your_model_deployment_name>`, `<your_model_deployment_name>` is likely just the name of the model you have deployed to the endpoint, for example `o3-mini` or `gpt-4o`. The screenshot below shows `o3-mini` and `gpt-4o` deployments made in the Azure portal under the `myendpt` resource.
![example azure deployment](/assets/azure-deployment.png)

View File

@@ -5,7 +5,27 @@ nav_order: 28
# Models and API keys
You need to tell aider which LLM to use and provide an API key.
Aider needs to know which LLM model you would like to work with and which keys
to provide when accessing it via API.
## Defaults
If you don't explicitly name a model, aider will try to select a model
for you to work with.
First, aider will check which
[keys you have provided via the environment, config files, or command line arguments](https://aider.chat/docs/config/api-keys.html).
Based on the available keys, aider will select the best model to use.
If you have not provided any keys, aider will offer to help you connect to
[OpenRouter](http://openrouter.ai)
which provides both free and paid access to the most popular LLMs.
Once connected, aider will select the best model available on OpenRouter
based on whether you have a free or paid account there.
## Specifying model & key
You can also tell aider which LLM to use and provide an API key.
The easiest way is to use the `--model` and `--api-key`
command line arguments, like this:
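The hunk ends just before the example it introduces; one plausible invocation (the model alias and key are placeholders, not a prescribed setup):

```shell
# Pick a model by alias and pass the provider's key inline
aider --model sonnet --api-key anthropic=<your-anthropic-key>
```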

View File

@@ -73,7 +73,7 @@ cog.out(text)
</a>
<a href="https://pypi.org/project/aider-chat/" class="github-badge badge-installs" title="Total number of installations via pip from PyPI">
<span class="badge-label">📦 Installs</span>
<span class="badge-value">1.8M</span>
<span class="badge-value">1.9M</span>
</a>
<div class="github-badge badge-tokens" title="Number of tokens processed weekly by Aider users">
<span class="badge-label">📈 Tokens/week</span>
@@ -85,7 +85,7 @@ cog.out(text)
</a>
<a href="/HISTORY.html" class="github-badge badge-coded" title="Percentage of the new code in Aider's last release written by Aider itself">
<span class="badge-label">🔄 Singularity</span>
<span class="badge-value">87%</span>
<span class="badge-value">86%</span>
</a>
<!--[[[end]]]-->
</div>

View File

@@ -81,15 +81,20 @@ def main():
parser.add_argument(
"--dry-run", action="store_true", help="Print each step without actually executing them"
)
parser.add_argument("--force", action="store_true", help="Skip pre-push checks")
args = parser.parse_args()
dry_run = args.dry_run
force = args.force
# Perform checks before proceeding
check_branch()
check_working_directory_clean()
check_main_branch_up_to_date()
check_ok_to_push()
# Perform checks before proceeding unless --force is used
if not force:
check_branch()
check_working_directory_clean()
check_main_branch_up_to_date()
check_ok_to_push()
else:
print("Skipping pre-push checks due to --force flag.")
new_version_str = args.new_version
if not re.match(r"^\d+\.\d+\.\d+$", new_version_str):
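The `--force` change above follows a common argparse pattern: register a boolean flag, then gate the expensive pre-push checks behind it. A standalone sketch of that pattern (the check function here is a stub, not aider's real helpers):

```python
import argparse

def run_checks():
    # Stand-in for check_branch(), check_working_directory_clean(), etc.
    print("running pre-push checks")

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("new_version")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--force", action="store_true", help="Skip pre-push checks")
    args = parser.parse_args(argv)

    # Perform checks before proceeding unless --force is used
    if not args.force:
        run_checks()
    else:
        print("Skipping pre-push checks due to --force flag.")
    return args

args = main(["0.81.2", "--force"])  # prints the skip message
```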

View File

@@ -194,8 +194,8 @@ class TestCoder(unittest.TestCase):
mock.return_value = set([str(fname1), str(fname2), str(fname3)])
coder.repo.get_tracked_files = mock
# Check that file mentions skip files with duplicate basenames
mentioned = coder.get_file_mentions(f"Check {fname2} and {fname3}")
# Check that file mentions of a pure basename skip files with duplicate basenames
mentioned = coder.get_file_mentions(f"Check {fname2.name} and {fname3}")
self.assertEqual(mentioned, {str(fname3)})
# Add a read-only file with same basename
@@ -366,6 +366,45 @@ class TestCoder(unittest.TestCase):
f"Failed to extract mentions from: {content}",
)
def test_get_file_mentions_multiline_backticks(self):
with GitTemporaryDirectory():
io = InputOutput(pretty=False, yes=True)
coder = Coder.create(self.GPT35, None, io)
# Create test files
test_files = [
"swebench/harness/test_spec/python.py",
"swebench/harness/test_spec/javascript.py",
]
for fname in test_files:
fpath = Path(fname)
fpath.parent.mkdir(parents=True, exist_ok=True)
fpath.touch()
# Mock get_addable_relative_files to return our test files
coder.get_addable_relative_files = MagicMock(return_value=set(test_files))
# Input text with multiline backticked filenames
content = """
Could you please **add the following files to the chat**?
1. `swebench/harness/test_spec/python.py`
2. `swebench/harness/test_spec/javascript.py`
Once I have these, I can show you precisely how to do the thing.
"""
expected_mentions = {
"swebench/harness/test_spec/python.py",
"swebench/harness/test_spec/javascript.py",
}
mentioned_files = coder.get_file_mentions(content)
self.assertEqual(
mentioned_files,
expected_mentions,
f"Failed to extract mentions from multiline backticked content: {content}",
)
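The behavior this new test exercises can be approximated with a single regex pass: pull every backticked span out of the message, including across multiline lists, and keep the ones that name known files. A simplified sketch, not aider's actual `get_file_mentions` implementation:

```python
import re

def backticked_mentions(content, known_files):
    """Collect `backticked` spans that match files we know about."""
    candidates = re.findall(r"`([^`\n]+)`", content)
    return {c for c in candidates if c in known_files}

known = {
    "swebench/harness/test_spec/python.py",
    "swebench/harness/test_spec/javascript.py",
}
msg = """
Could you please **add the following files to the chat**?

1. `swebench/harness/test_spec/python.py`
2. `swebench/harness/test_spec/javascript.py`
"""
print(backticked_mentions(msg, known))  # both paths are found
```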
def test_get_file_mentions_path_formats(self):
with GitTemporaryDirectory():
io = InputOutput(pretty=False, yes=True)