Mirror of https://github.com/Aider-AI/aider, synced 2026-05-05 06:32:04 +02:00

Compare commits: v0.71.1.dev ... v0.76.1.dev (938 commits)
[Commit table omitted: only bare SHA1 hashes survived extraction; the Author, Date, and message columns were empty.]
.github/workflows/docker-build-test.yml (vendored): 53 changed lines

````diff
@@ -4,23 +4,25 @@ on:
   push:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
-      - HISTORY.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/docker-build-test.yml'
     branches:
       - main
   pull_request:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/docker-build-test.yml'
     branches:
       - main
 
-# copy most of these steps from release.yml, but push: false and no tags:
-
 jobs:
-  build:
+  docker_build_and_push:
     runs-on: ubuntu-latest
 
     steps:
       - name: Checkout code
        uses: actions/checkout@v4
@@ -29,11 +31,19 @@ jobs:
 
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
 
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
 
-     - name: Build Docker standard image
+     - name: Login to DockerHub
+       if: ${{ github.event_name != 'pull_request' }}
+       uses: docker/login-action@v3
+       with:
+         username: ${{ secrets.DOCKERHUB_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_PASSWORD }}
+
+     - name: Build Docker images (PR)
+       if: ${{ github.event_name == 'pull_request' }}
        uses: docker/build-push-action@v5
        with:
          context: .
@@ -42,7 +52,19 @@ jobs:
         push: false
         target: aider
 
-     - name: Build Docker full image
+     - name: Build Docker images (Push)
+       if: ${{ github.event_name != 'pull_request' }}
+       uses: docker/build-push-action@v5
+       with:
+         context: .
+         file: ./docker/Dockerfile
+         platforms: linux/amd64,linux/arm64
+         push: true
+         tags: ${{ secrets.DOCKERHUB_USERNAME }}/aider:dev
+         target: aider
+
+     - name: Build Docker full image (PR)
+       if: ${{ github.event_name == 'pull_request' }}
        uses: docker/build-push-action@v5
        with:
          context: .
@@ -50,3 +72,14 @@ jobs:
         platforms: linux/amd64,linux/arm64
         push: false
         target: aider-full
+
+     - name: Build Docker full image (Push)
+       if: ${{ github.event_name != 'pull_request' }}
+       uses: docker/build-push-action@v5
+       with:
+         context: .
+         file: ./docker/Dockerfile
+         platforms: linux/amd64,linux/arm64
+         push: true
+         tags: ${{ secrets.DOCKERHUB_USERNAME }}/aider-full:dev
+         target: aider-full
````
.github/workflows/pages.yml (vendored): 8 changed lines

````diff
@@ -12,6 +12,7 @@ on:
       - "main"
     paths:
       - "aider/website/**"
+      - ".github/workflows/pages.yml"
 
   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
@@ -55,10 +56,9 @@ jobs:
         env:
           JEKYLL_ENV: production
       - name: Upload artifact
-        # Automatically uploads an artifact from the './_site' directory by default
-        uses: actions/upload-pages-artifact@v1
+        uses: actions/upload-pages-artifact@v3
        with:
-          path: "aider/website/_site/"
+          path: "aider/website/_site"
 
   # Deployment job
   deploy:
@@ -70,7 +70,7 @@ jobs:
     steps:
       - name: Deploy to GitHub Pages
         id: deployment
-        uses: actions/deploy-pages@v2
+        uses: actions/deploy-pages@v4
 
       - name: Set up Python 3.12
         uses: actions/setup-python@v5
````
.github/workflows/ubuntu-tests.yml (vendored): 11 changed lines

````diff
@@ -4,14 +4,19 @@ on:
   push:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
-      - HISTORY.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/ubuntu-tests.yml'
     branches:
       - main
   pull_request:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/ubuntu-tests.yml'
     branches:
       - main
 
````
.github/workflows/windows-tests.yml (vendored): 11 changed lines

````diff
@@ -4,14 +4,19 @@ on:
   push:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
-      - HISTORY.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/windows-tests.yml'
     branches:
       - main
   pull_request:
     paths-ignore:
       - 'aider/website/**'
-      - README.md
+      - 'README.md'
+      - 'HISTORY.md'
+      - '.github/workflows/*'
+      - '!.github/workflows/windows-tests.yml'
     branches:
       - main
 
````
HISTORY.md: 134 changed lines

````diff
@@ -1,5 +1,132 @@
 # Release history
 
+### main branch
+
+- Improved support for thinking/reasoning models:
+  - Added `--thinking-tokens` CLI option to control token budget for models that support thinking.
+  - Display thinking/reasoning content from LLMs which return it.
+  - Enhanced handling of reasoning tags to better clean up model responses.
+  - Added deprecation warning for `remove_reasoning` setting, now replaced by `reasoning_tag`.
+- Aider will notify you when it's completed the last request and needs your input:
+  - Added [notifications when LLM responses are ready](https://aider.chat/docs/usage/notifications.html) with `--notifications` flag.
+  - Specify desktop notification command with `--notifications-command`.
+- Added support for QWQ 32B.
+- Switch to `tree-sitter-language-pack` for tree sitter support.
+- Improved error handling for EOF (Ctrl+D) in user input prompts.
+- Added helper function to ensure hex color values have a # prefix.
+- Fixed handling of Git errors when reading staged files.
+- Improved SSL verification control for model information requests.
+- Improved empty LLM response handling with clearer warning messages.
+- Fixed Git identity retrieval to respect global configuration, by Akira Komamura.
+- Offer to install dependencies for Bedrock and Vertex AI models.
+- Deprecated model shortcut args (like --4o, --opus) in favor of the --model flag.
+- Added C# language support for tree-sitter parsing.
+- Improved handling of NO_COLOR environment variable for disabling colored output.
+- Simplified reasoning content handling in stream processing.
+- Added support for both reasoning and reasoning_content fields from different models.
+- Aider wrote 85% of the code in this release.
+
+### Aider v0.75.3
+
+- Support for V3 free on OpenRouter: `--model openrouter/deepseek/deepseek-chat:free`.
+
+### Aider v0.75.2
+
+- Added support for Claude 3.7 Sonnet models on OpenRouter, Bedrock and Vertex AI.
+- Updated default model to Claude 3.7 Sonnet on OpenRouter.
+- Added support for GPT-4.5-preview model.
+- Added support for Claude 3.7 Sonnet:beta on OpenRouter.
+- Fixed weak_model_name patterns to match main model name patterns for some models.
+
+### Aider v0.75.1
+
+- Added support for `openrouter/anthropic/claude-3.7-sonnet`
+
+### Aider v0.75.0
+
+- Basic support for Claude 3.7 Sonnet
+  - Use `--model sonnet` to use the new 3.7
+  - Thinking support coming soon.
+- Bugfix to `/editor` command.
+- Aider wrote 46% of the code in this release.
+
+### Aider v0.74.3
+
+- Downgrade streamlit dependency to avoid threading bug.
+- Added support for tree-sitter language pack.
+- Added openrouter/o3-mini-high model configuration.
+- Added build.gradle.kts to special files for Kotlin project support, by Lucas Shadler.
+
+### Aider v0.74.2
+
+- Prevent more than one cache warming thread from becoming active.
+- Fixed continuation prompt ". " for multiline input.
+- Added HCL (Terraform) syntax support, by Warren Krewenki.
+
+### Aider v0.74.1
+
+- Have o1 & o3-mini generate markdown by sending the magic "Formatting re-enabled." string.
+- Bugfix for multi-line inputs, which should not include the ". " continuation prompt.
+
+### Aider v0.74.0
+
+- Dynamically changes the Ollama context window to hold the current chat.
+- Better support for o3-mini, DeepSeek V3 & R1, o1-mini, o1 especially via third-party API providers.
+- Remove `<think>` tags from R1 responses for commit messages (and other weak model uses).
+- Can now specify `use_temperature: <float>` in model settings, not just true/false.
+- The full docker container now includes `boto3` for Bedrock.
+- Docker containers now set `HOME=/app` which is the normal project mount-point, to persist `~/.aider`.
+- Bugfix to prevent creating incorrect filenames like `python`, `php`, etc.
+- Bugfix for `--timeout`
+- Bugfix so that `/model` now correctly reports that the weak model is not changed.
+- Bugfix so that multi-line mode persists through ^C at confirmation prompts.
+- Watch files now fully ignores top-level directories named in ignore files, to reduce the chance of hitting OS watch limits. Helpful to ignore giant subtrees like `node_modules`.
+- Fast startup with more providers and when model metadata provided in local files.
+- Improved .gitignore handling:
+  - Honor ignores already in effect regardless of how they've been configured.
+  - Check for .env only when the file exists.
+- Yes/No prompts now accept All/Skip as alias for Y/N even when not processing a group of confirmations.
+- Aider wrote 77% of the code in this release.
+
+### Aider v0.73.0
+
+- Full support for o3-mini: `aider --model o3-mini`
+- New `--reasoning-effort` argument: low, medium, high.
+- Improved handling of context window size limits, with better messaging and Ollama-specific guidance.
+- Added support for removing model-specific reasoning tags from responses with `remove_reasoning: tagname` model setting.
+- Auto-create parent directories when creating new files, by xqyz.
+- Support for R1 free on OpenRouter: `--model openrouter/deepseek/deepseek-r1:free`
+- Aider wrote 69% of the code in this release.
+
+### Aider v0.72.3
+
+- Enforce user/assistant turn order to avoid R1 errors, by miradnanali.
+- Case-insensitive model name matching while preserving original case.
+
+### Aider v0.72.2
+- Harden against user/assistant turn order problems which cause R1 errors.
+
+### Aider v0.72.1
+- Fix model metadata for `openrouter/deepseek/deepseek-r1`
+
+### Aider v0.72.0
+- Support for DeepSeek R1.
+  - Use shortcut: `--model r1`
+  - Also via OpenRouter: `--model openrouter/deepseek/deepseek-r1`
+- Added Kotlin syntax support to repo map, by Paul Walker.
+- Added `--line-endings` for file writing, by Titusz Pan.
+- Added examples_as_sys_msg=True for GPT-4o models, improves benchmark scores.
+- Bumped all dependencies, to pick up litellm support for o1 system messages.
+- Bugfix for turn taking when reflecting lint/test errors.
+- Aider wrote 52% of the code in this release.
+
+### Aider v0.71.1
+
+- Fix permissions issue in Docker images.
+- Added read-only file announcements.
+- Bugfix: ASCII fallback for unicode errors.
+- Bugfix: integer indices for list slicing in repomap calculations.
+
 ### Aider v0.71.0
 
 - Prompts to help DeepSeek work better when alternating between `/ask` and `/code`.
@@ -13,16 +140,13 @@
 - Turn off fancy input and watch files if terminal is dumb.
 - Added support for custom voice format and input device settings.
 - Disabled Streamlit email prompt, by apaz-cli.
+- Docker container runs as non-root user.
 - Fixed lint command handling of nested spaced strings, by Aaron Weisberg.
 - Added token count feedback when adding command output to chat.
 - Improved error handling for large audio files with automatic format conversion.
 - Improved handling of git repo index errors, by Krazer.
 - Improved unicode handling in console output with ASCII fallback.
-- Added AssertionError to git error handling.
-- Fixed file export path in voice format conversion.
-- Added AttributeError to git error handling.
-- Improved markdown rendering performance with adaptive delay based on render time.
-- Fixed typo in model metadata variable name.
+- Added AssertionError, AttributeError to git error handling.
 - Aider wrote 60% of the code in this release.
 
 ### Aider v0.70.0
````
README.md: 24 changed lines

````diff
@@ -6,8 +6,7 @@
 Aider lets you pair program with LLMs,
 to edit code in your local git repository.
 Start a new project or work with an existing code base.
-Aider works best with Claude 3.5 Sonnet, DeepSeek V3, o1 & GPT-4o and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
-
+Aider works best with Claude 3.7 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o. Aider can [connect to almost any LLM, including local models](https://aider.chat/docs/llms.html).
 
 <!-- SCREENCAST START -->
 <p align="center">
@@ -52,11 +51,20 @@ aider-install
 # Change directory into your code base
 cd /to/your/project
 
-# Work with Claude 3.5 Sonnet on your code
-aider --model sonnet --anthropic-api-key your-key-goes-here
+# Work with DeepSeek via DeepSeek's API
+aider --model deepseek --api-key deepseek=your-key-goes-here
 
-# Work with GPT-4o on your code
-aider --model gpt-4o --openai-api-key your-key-goes-here
+# Work with Claude 3.7 Sonnet via Anthropic's API
+aider --model sonnet --api-key anthropic=your-key-goes-here
+
+# Work with GPT-4o via OpenAI's API
+aider --model gpt-4o --api-key openai=your-key-goes-here
+
+# Work with Sonnet via OpenRouter's API
+aider --model openrouter/anthropic/claude-3.7-sonnet --api-key openrouter=your-key-goes-here
+
+# Work with DeepSeek via OpenRouter's API
+aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=your-key-goes-here
 ```
 <!--[[[end]]]-->
 
@@ -72,7 +80,7 @@ for more details.
 - Ask for changes:
   - Add new features or test cases.
   - Describe a bug.
-  - Paste in an error message or or GitHub issue URL.
+  - Paste in an error message or GitHub issue URL.
   - Refactor code.
   - Update docs.
 - Aider will edit your files to complete your request.
@@ -87,7 +95,7 @@ Pair program with AI.
 - [Add images to the chat](https://aider.chat/docs/usage/images-urls.html) (GPT-4o, Claude 3.5 Sonnet, etc).
 - [Add URLs to the chat](https://aider.chat/docs/usage/images-urls.html) and aider will read their content.
 - [Code with your voice](https://aider.chat/docs/usage/voice.html).
-- Aider works best with Claude 3.5 Sonnet, DeepSeek V3, o1 & GPT-4o and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
+- Aider works best with Claude 3.7 Sonnet, DeepSeek V3, o1 & GPT-4o and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
 
 
 ## Top tier performance
````
aider/__init__.py:

````diff
@@ -1,6 +1,6 @@
 from packaging import version
 
-__version__ = "0.71.1.dev"
+__version__ = "0.76.1.dev"
 safe_version = __version__
 
 try:
````
aider/args.py: 134 changed lines

````diff
@@ -12,6 +12,7 @@ from aider.args_formatter import (
     MarkdownHelpFormatter,
     YamlHelpFormatter,
 )
+from aider.deprecated import add_deprecated_model_args
 
 from .dump import dump  # noqa: F401
 
@@ -38,98 +39,6 @@ def get_parser(default_config_files, git_root):
         default=None,
         help="Specify the model to use for the main chat",
     )
-    opus_model = "claude-3-opus-20240229"
-    group.add_argument(
-        "--opus",
-        action="store_const",
-        dest="model",
-        const=opus_model,
-        help=f"Use {opus_model} model for the main chat",
-    )
-    sonnet_model = "claude-3-5-sonnet-20241022"
-    group.add_argument(
-        "--sonnet",
-        action="store_const",
-        dest="model",
-        const=sonnet_model,
-        help=f"Use {sonnet_model} model for the main chat",
-    )
-    haiku_model = "claude-3-5-haiku-20241022"
-    group.add_argument(
-        "--haiku",
-        action="store_const",
-        dest="model",
-        const=haiku_model,
-        help=f"Use {haiku_model} model for the main chat",
-    )
-    gpt_4_model = "gpt-4-0613"
-    group.add_argument(
-        "--4",
-        "-4",
-        action="store_const",
-        dest="model",
-        const=gpt_4_model,
-        help=f"Use {gpt_4_model} model for the main chat",
-    )
-    gpt_4o_model = "gpt-4o"
-    group.add_argument(
-        "--4o",
-        action="store_const",
-        dest="model",
-        const=gpt_4o_model,
-        help=f"Use {gpt_4o_model} model for the main chat",
-    )
-    gpt_4o_mini_model = "gpt-4o-mini"
-    group.add_argument(
-        "--mini",
-        action="store_const",
-        dest="model",
-        const=gpt_4o_mini_model,
-        help=f"Use {gpt_4o_mini_model} model for the main chat",
-    )
-    gpt_4_turbo_model = "gpt-4-1106-preview"
-    group.add_argument(
-        "--4-turbo",
-        action="store_const",
-        dest="model",
-        const=gpt_4_turbo_model,
-        help=f"Use {gpt_4_turbo_model} model for the main chat",
-    )
-    gpt_3_model_name = "gpt-3.5-turbo"
-    group.add_argument(
-        "--35turbo",
-        "--35-turbo",
-        "--3",
-        "-3",
-        action="store_const",
-        dest="model",
-        const=gpt_3_model_name,
-        help=f"Use {gpt_3_model_name} model for the main chat",
-    )
-    deepseek_model = "deepseek/deepseek-chat"
-    group.add_argument(
-        "--deepseek",
-        action="store_const",
-        dest="model",
-        const=deepseek_model,
-        help=f"Use {deepseek_model} model for the main chat",
-    )
-    o1_mini_model = "o1-mini"
-    group.add_argument(
-        "--o1-mini",
-        action="store_const",
-        dest="model",
-        const=o1_mini_model,
-        help=f"Use {o1_mini_model} model for the main chat",
-    )
-    o1_preview_model = "o1-preview"
-    group.add_argument(
-        "--o1-preview",
-        action="store_const",
-        dest="model",
-        const=o1_preview_model,
-        help=f"Use {o1_preview_model} model for the main chat",
-    )
 
     ##########
     group = parser.add_argument_group("API Keys and settings")
@@ -203,6 +112,16 @@ def get_parser(default_config_files, git_root):
         metavar="ALIAS:MODEL",
         help="Add a model alias (can be used multiple times)",
     )
+    group.add_argument(
+        "--reasoning-effort",
+        type=str,
+        help="Set the reasoning_effort API parameter (default: not set)",
+    )
+    group.add_argument(
+        "--thinking-tokens",
+        type=int,
+        help="Set the thinking token budget for models that support it (default: not set)",
+    )
     group.add_argument(
         "--verify-ssl",
         action=argparse.BooleanOptionalAction,
@@ -211,7 +130,7 @@ def get_parser(default_config_files, git_root):
     )
     group.add_argument(
         "--timeout",
-        type=int,
+        type=float,
         default=None,
         help="Timeout in seconds for API calls (default: None)",
     )
@@ -766,6 +685,12 @@ def get_parser(default_config_files, git_root):
         default="utf-8",
         help="Specify the encoding for input and output (default: utf-8)",
     )
+    group.add_argument(
+        "--line-endings",
+        choices=["platform", "lf", "crlf"],
+        default="platform",
+        help="Line endings to use when writing files (default: platform)",
+    )
     group.add_argument(
         "-c",
         "--config",
@@ -802,6 +727,24 @@ def get_parser(default_config_files, git_root):
         default=False,
         help="Enable/disable multi-line input mode with Meta-Enter to submit (default: False)",
     )
+    group.add_argument(
+        "--notifications",
+        action=argparse.BooleanOptionalAction,
+        default=False,
+        help=(
+            "Enable/disable terminal bell notifications when LLM responses are ready (default:"
+            " False)"
+        ),
+    )
+    group.add_argument(
+        "--notifications-command",
+        metavar="COMMAND",
+        default=None,
+        help=(
+            "Specify a command to run for notifications instead of the terminal bell. If not"
+            " specified, a default command for your OS may be used."
+        ),
+    )
     group.add_argument(
         "--detect-urls",
         action=argparse.BooleanOptionalAction,
@@ -813,6 +756,11 @@ def get_parser(default_config_files, git_root):
         help="Specify which editor to use for the /editor command",
     )
 
+    ##########
+    group = parser.add_argument_group("Deprecated model settings")
+    # Add deprecated model shortcut arguments
+    add_deprecated_model_args(parser, group)
+
     return parser
 
 
````
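The shortcut flags deleted above are re-registered at the end of `get_parser()` through `add_deprecated_model_args(parser, group)`, imported from `aider.deprecated`. That module is not part of this compare view, so the following is only a hedged sketch of what it might look like; the switch-to-model mapping is lifted from the deleted code, not from the real file.

```python
# Hypothetical sketch of aider/deprecated.py (the real module is not shown
# in this diff). It keeps the old shortcut flags parseable while the help
# text steers users toward the --model replacement.
DEPRECATED_MODEL_SWITCHES = {
    "--opus": "claude-3-opus-20240229",
    "--sonnet": "claude-3-5-sonnet-20241022",
    "--4o": "gpt-4o",
    "--mini": "gpt-4o-mini",
    "--deepseek": "deepseek/deepseek-chat",
}


def add_deprecated_model_args(parser, group):
    # `parser` is accepted to match the call site in args.py, even though
    # this sketch only needs the argument group.
    for switch, model in DEPRECATED_MODEL_SWITCHES.items():
        group.add_argument(
            switch,
            action="store_const",
            dest="model",
            const=model,
            help=f"Deprecated: use --model {model} instead",
        )
```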
aider/args_formatter.py:

````diff
@@ -148,11 +148,14 @@ class YamlHelpFormatter(argparse.HelpFormatter):
             parts.append(f"#{switch}: xxx")
             parts.append("## Specify multiple values like this:")
             parts.append(f"#{switch}:")
-            parts.append(f"# - xxx")
-            parts.append(f"# - yyy")
-            parts.append(f"# - zzz")
+            parts.append("# - xxx")
+            parts.append("# - yyy")
+            parts.append("# - zzz")
         else:
-            parts.append(f"#{switch}: xxx\n")
+            if switch.endswith("color"):
+                parts.append(f'#{switch}: "xxx"\n')
+            else:
+                parts.append(f"#{switch}: xxx\n")
 
         ###
         # parts.append(str(action))
````
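The color-specific branch matters because a bare hex value is not valid YAML: the `#` starts a comment, so an unquoted sample value would parse as empty. A quick check (assuming PyYAML is installed) shows the difference:

```python
import yaml

# Unquoted: the '#' begins a YAML comment, so the value is lost.
print(yaml.safe_load("user-input-color: #00cc00"))    # {'user-input-color': None}

# Quoted, as the new code emits for switches ending in "color":
print(yaml.safe_load('user-input-color: "#00cc00"'))  # {'user-input-color': '#00cc00'}
```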
aider/coders/base_coder.py:

````diff
@@ -27,10 +27,16 @@ from aider.history import ChatSummary
 from aider.io import ConfirmGroup, InputOutput
 from aider.linter import Linter
 from aider.llm import litellm
+from aider.models import RETRY_TIMEOUT
+from aider.reasoning_tags import (
+    REASONING_TAG,
+    format_reasoning_content,
+    remove_reasoning_content,
+    replace_reasoning_tags,
+)
 from aider.repo import ANY_GIT_ERROR, GitRepo
 from aider.repomap import RepoMap
 from aider.run_cmd import run_cmd
-from aider.sendchat import RETRY_TIMEOUT, send_completion
 from aider.utils import format_content, format_messages, format_tokens, is_image_file
 
 from ..dump import dump  # noqa: F401
@@ -60,7 +66,7 @@ def wrap_fence(name):
 
 all_fences = [
     ("`" * 3, "`" * 3),
-    ("`" * 4, "`" * 4),
+    ("`" * 4, "`" * 4),  # LLMs ignore and revert to triple-backtick, causing #2879
     wrap_fence("source"),
     wrap_fence("code"),
     wrap_fence("pre"),
@@ -85,7 +91,7 @@ class Coder:
     max_reflections = 3
     edit_format = None
     yield_stream = False
-    temperature = 0
+    temperature = None
     auto_lint = True
     auto_test = False
     test_cmd = None
@@ -144,7 +150,13 @@ class Coder:
         # the system prompt.
         done_messages = from_coder.done_messages
         if edit_format != from_coder.edit_format and done_messages and summarize_from_coder:
-            done_messages = from_coder.summarizer.summarize_all(done_messages)
+            try:
+                done_messages = from_coder.summarizer.summarize_all(done_messages)
+            except ValueError:
+                # If summarization fails, keep the original messages and warn the user
+                io.tool_warning(
+                    "Chat history summarization failed, continuing with full history"
+                )
 
         # Bring along context from the old Coder
         update = dict(
@@ -162,6 +174,7 @@ class Coder:
         use_kwargs.update(kwargs)  # override passed kwargs
 
         kwargs = use_kwargs
+        from_coder.ok_to_warm_cache = False
 
         for coder in coders.__all__:
             if hasattr(coder, "edit_format") and coder.edit_format == edit_format:
@@ -246,6 +259,10 @@ class Coder:
         for fname in self.get_inchat_relative_files():
             lines.append(f"Added {fname} to the chat.")
 
+        for fname in self.abs_read_only_fnames:
+            rel_fname = self.get_rel_fname(fname)
+            lines.append(f"Added {rel_fname} to the chat (read-only).")
+
         if self.done_messages:
             lines.append("Restored previous conversation history.")
 
@@ -254,6 +271,8 @@ class Coder:
 
         return lines
 
+    ok_to_warm_cache = False
+
     def __init__(
         self,
         main_model,
@@ -362,6 +381,10 @@ class Coder:
         self.pretty = self.io.pretty
 
         self.main_model = main_model
+        # Set the reasoning tag name based on model settings or default
+        self.reasoning_tag_name = (
+            self.main_model.reasoning_tag if self.main_model.reasoning_tag else REASONING_TAG
+        )
 
         self.stream = stream and main_model.streaming
 
@@ -455,6 +478,7 @@ class Coder:
 
         self.summarizer_thread = None
         self.summarized_done_messages = []
+        self.summarizing_messages = None
 
         if not self.done_messages and restore_chat_history:
             history_md = self.io.read_text(self.io.chat_history_file)
@@ -938,8 +962,9 @@ class Coder:
         self.summarizer_thread.start()
 
     def summarize_worker(self):
+        self.summarizing_messages = list(self.done_messages)
         try:
-            self.summarized_done_messages = self.summarizer.summarize(self.done_messages)
+            self.summarized_done_messages = self.summarizer.summarize(self.summarizing_messages)
         except ValueError as err:
             self.io.tool_warning(err.args[0])
 
@@ -953,7 +978,9 @@ class Coder:
         self.summarizer_thread.join()
         self.summarizer_thread = None
 
-        self.done_messages = self.summarized_done_messages
+        if self.summarizing_messages == self.done_messages:
+            self.done_messages = self.summarized_done_messages
+        self.summarizing_messages = None
         self.summarized_done_messages = []
 
     def move_back_cur_messages(self, message):
````
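Taken together, the `summarizing_messages` changes close a race between the background summarizer and newly arriving chat messages: the worker summarizes a snapshot, and the result is applied only if the history is still identical to that snapshot. A standalone sketch of the pattern, with names simplified from the diff:

```python
import threading


class HistoryHolder:
    def __init__(self):
        self.done_messages = []
        self.summarizing_messages = None
        self.summarized = []

    def start_summarize(self, summarize_fn):
        self.summarizing_messages = list(self.done_messages)  # snapshot

        def worker():
            self.summarized = summarize_fn(self.summarizing_messages)

        thread = threading.Thread(target=worker)
        thread.start()
        return thread

    def finish_summarize(self, thread):
        thread.join()
        # Only swap in the summary if no messages arrived while the worker
        # ran; a stale summary is discarded rather than clobbering history.
        if self.summarizing_messages == self.done_messages:
            self.done_messages = self.summarized
        self.summarizing_messages = None
        self.summarized = []
```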
````diff
@@ -1047,14 +1074,26 @@ class Coder:
         else:
             language = "the same language they are using"
 
+        if self.fence[0] == "`" * 4:
+            quad_backtick_reminder = (
+                "\nIMPORTANT: Use *quadruple* backticks ```` as fences, not triple backticks!\n"
+            )
+        else:
+            quad_backtick_reminder = ""
+
         prompt = prompt.format(
             fence=self.fence,
+            quad_backtick_reminder=quad_backtick_reminder,
             lazy_prompt=lazy_prompt,
             platform=platform_text,
             shell_cmd_prompt=shell_cmd_prompt,
             shell_cmd_reminder=shell_cmd_reminder,
             language=language,
         )
 
+        if self.main_model.system_prompt_prefix:
+            prompt = self.main_model.system_prompt_prefix + prompt
+
         return prompt
 
     def format_chat_chunks(self):
````
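The new `quad_backtick_reminder` only fires when the coder has already escalated to four-backtick fences, which happens when the chat content itself contains triple backticks (see the `all_fences` comment above about LLMs reverting anyway, issue #2879). Below is a rough sketch of the fence-escalation idea, not aider's exact implementation:

```python
# Rough sketch (not aider's exact code): pick the first candidate fence
# that does not appear in the text being wrapped, escalating from triple
# to quadruple backticks and then to HTML-style tags.
def choose_fence(text, candidates):
    for open_fence, close_fence in candidates:
        if open_fence not in text and close_fence not in text:
            return open_fence, close_fence
    return candidates[-1]  # nothing is safe; fall back to the last option


candidates = [
    ("`" * 3, "`" * 3),
    ("`" * 4, "`" * 4),
    ("<source>", "</source>"),
]
print(choose_fence("plain text", candidates))      # triple backticks
print(choose_fence("has ``` inside", candidates))  # quadruple backticks
```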
````diff
@@ -1174,8 +1213,11 @@ class Coder:
             return
         if not self.num_cache_warming_pings:
             return
+        if not self.ok_to_warm_cache:
+            return
 
         delay = 5 * 60 - 5
+        delay = float(os.environ.get("AIDER_CACHE_KEEPALIVE_DELAY", delay))
         self.next_cache_warm = time.time() + delay
         self.warming_pings_left = self.num_cache_warming_pings
         self.cache_warming_chunks = chunks
@@ -1184,7 +1226,7 @@ class Coder:
             return
 
         def warm_cache_worker():
-            while True:
+            while self.ok_to_warm_cache:
                 time.sleep(1)
                 if self.warming_pings_left <= 0:
                     continue
````
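These two hunks make the cache-warming keepalive stoppable (cloning a Coder clears the flag via `from_coder.ok_to_warm_cache = False` in the hunk at -162 above) and let the ping interval be tuned through the `AIDER_CACHE_KEEPALIVE_DELAY` environment variable. A minimal standalone sketch of the loop, under those assumptions:

```python
import os
import threading
import time


class CacheWarmer:
    ok_to_warm_cache = True

    def start(self, ping):
        delay = 5 * 60 - 5  # default: just under five minutes
        delay = float(os.environ.get("AIDER_CACHE_KEEPALIVE_DELAY", delay))
        self.next_warm = time.time() + delay

        def worker():
            # Exits cleanly once ok_to_warm_cache is cleared, instead of
            # looping forever like the old `while True:` version.
            while self.ok_to_warm_cache:
                time.sleep(1)
                if time.time() < self.next_warm:
                    continue
                ping()
                self.next_warm = time.time() + delay

        threading.Thread(target=worker, daemon=True).start()

    def stop(self):
        self.ok_to_warm_cache = False
```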
@@ -1222,15 +1264,43 @@ class Coder:

         return chunks

+    def check_tokens(self, messages):
+        """Check if the messages will fit within the model's token limits."""
+        input_tokens = self.main_model.token_count(messages)
+        max_input_tokens = self.main_model.info.get("max_input_tokens") or 0
+
+        if max_input_tokens and input_tokens >= max_input_tokens:
+            self.io.tool_error(
+                f"Your estimated chat context of {input_tokens:,} tokens exceeds the"
+                f" {max_input_tokens:,} token limit for {self.main_model.name}!"
+            )
+            self.io.tool_output("To reduce the chat context:")
+            self.io.tool_output("- Use /drop to remove unneeded files from the chat")
+            self.io.tool_output("- Use /clear to clear the chat history")
+            self.io.tool_output("- Break your code into smaller files")
+            self.io.tool_output(
+                "It's probably safe to try and send the request, most providers won't charge if"
+                " the context limit is exceeded."
+            )
+
+            if not self.io.confirm_ask("Try to proceed anyway?"):
+                return False
+        return True
+
     def send_message(self, inp):
         self.event("message_send_starting")

+        # Notify IO that LLM processing is starting
+        self.io.llm_started()
+
         self.cur_messages += [
             dict(role="user", content=inp),
         ]

         chunks = self.format_messages()
         messages = chunks.all_messages()
+        if not self.check_tokens(messages):
+            return
         self.warm_cache(chunks)

         if self.verbose:
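The new `check_tokens()` guard runs before every send: it compares the estimated prompt size against the model's advertised input limit and lets the user bail out early. A standalone sketch of the decision logic, with `confirm` standing in for `io.confirm_ask`:

```python
def check_tokens(input_tokens, max_input_tokens, confirm):
    # Warn and ask before sending a request that exceeds the model's limit.
    if max_input_tokens and input_tokens >= max_input_tokens:
        print(f"Estimated {input_tokens:,} tokens exceeds the {max_input_tokens:,} token limit!")
        return confirm("Try to proceed anyway?")
    return True

# A limit of 0 (unknown) always passes; an oversized prompt defers to the user.
assert check_tokens(1000, 0, lambda q: False)
assert not check_tokens(210_000, 200_000, lambda q: False)
```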
@@ -1291,7 +1361,7 @@ class Coder:
                 exhausted = True
                 break

-            self.multi_response_content = self.get_multi_response_content()
+            self.multi_response_content = self.get_multi_response_content_in_progress()

             if messages[-1]["role"] == "assistant":
                 messages[-1]["content"] = self.multi_response_content

@@ -1311,14 +1381,30 @@ class Coder:
                 self.live_incremental_response(True)
             self.mdstream = None

-        self.partial_response_content = self.get_multi_response_content(True)
+        self.partial_response_content = self.get_multi_response_content_in_progress(True)
+        self.remove_reasoning_content()
         self.multi_response_content = ""

+        ###
+        # print()
+        # print("=" * 20)
+        # dump(self.partial_response_content)
+
         self.io.tool_output()

         self.show_usage_report()

+        self.add_assistant_reply_to_cur_messages()
+
         if exhausted:
+            if self.cur_messages and self.cur_messages[-1]["role"] == "user":
+                self.cur_messages += [
+                    dict(
+                        role="assistant",
+                        content="FinishReasonLength exception: you sent too many tokens",
+                    ),
+                ]
+
             self.show_exhausted_error()
             self.num_exhausted_context_windows += 1
             return
@@ -1349,14 +1435,17 @@ class Coder:
             interrupted = True

         if interrupted:
-            content += "\n^C KeyboardInterrupt"
-            self.cur_messages += [dict(role="assistant", content=content)]
+            if self.cur_messages and self.cur_messages[-1]["role"] == "user":
+                self.cur_messages[-1]["content"] += "\n^C KeyboardInterrupt"
+            else:
+                self.cur_messages += [dict(role="user", content="^C KeyboardInterrupt")]
+            self.cur_messages += [
+                dict(role="assistant", content="I see that you interrupted my previous reply.")
+            ]
             return

         edited = self.apply_updates()

-        self.update_cur_messages()

         if edited:
             self.aider_edited_files.update(edited)
             saved_message = self.auto_commit(edited)
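The interrupt handling above is reworked so the transcript keeps strictly alternating user/assistant turns: the `^C` marker is appended to the user's own message when possible, and a synthetic assistant reply is always added afterwards. A sketch of the bookkeeping:

```python
cur_messages = [dict(role="user", content="refactor foo()")]

# Note the interrupt on the right role so turns keep alternating.
if cur_messages and cur_messages[-1]["role"] == "user":
    cur_messages[-1]["content"] += "\n^C KeyboardInterrupt"
else:
    cur_messages += [dict(role="user", content="^C KeyboardInterrupt")]
cur_messages += [
    dict(role="assistant", content="I see that you interrupted my previous reply.")
]
```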
@@ -1377,7 +1466,6 @@ class Coder:
                 ok = self.io.confirm_ask("Attempt to fix lint errors?")
                 if ok:
                     self.reflected_message = lint_errors
-                    self.update_cur_messages()
                     return

         shared_output = self.run_shell_commands()

@@ -1394,7 +1482,6 @@ class Coder:
                 ok = self.io.confirm_ask("Attempt to fix test errors?")
                 if ok:
                     self.reflected_message = test_errors
-                    self.update_cur_messages()
                     return

     def reply_completed(self):
@@ -1470,7 +1557,11 @@ class Coder:

         return res

-    def update_cur_messages(self):
+    def __del__(self):
+        """Cleanup when the Coder object is destroyed."""
+        self.ok_to_warm_cache = False
+
+    def add_assistant_reply_to_cur_messages(self):
         if self.partial_response_content:
             self.cur_messages += [dict(role="assistant", content=self.partial_response_content)]
         if self.partial_response_function_call:
@@ -1536,7 +1627,9 @@ class Coder:
         added_fnames = []
         group = ConfirmGroup(new_mentions)
         for rel_fname in sorted(new_mentions):
-            if self.io.confirm_ask(f"Add {rel_fname} to the chat?", group=group, allow_never=True):
+            if self.io.confirm_ask(
+                "Add file to the chat?", subject=rel_fname, group=group, allow_never=True
+            ):
                 self.add_rel_fname(rel_fname)
                 added_fnames.append(rel_fname)
             else:
@@ -1546,6 +1639,9 @@ class Coder:
             return prompts.added_files.format(fnames=", ".join(added_fnames))

     def send(self, messages, model=None, functions=None):
+        self.got_reasoning_content = False
+        self.ended_reasoning_content = False
+
         if not model:
             model = self.main_model

@@ -1554,20 +1650,13 @@ class Coder:

             self.io.log_llm_history("TO LLM", format_messages(messages))

-        if self.main_model.use_temperature:
-            temp = self.temperature
-        else:
-            temp = None
-
         completion = None
         try:
-            hash_object, completion = send_completion(
-                model.name,
+            hash_object, completion = model.send_completion(
                 messages,
                 functions,
                 self.stream,
-                temp,
-                extra_params=model.extra_params,
+                self.temperature,
             )
             self.chat_completion_call_hashes.append(hash_object.hexdigest())

@@ -1620,6 +1709,14 @@ class Coder:
         except AttributeError as func_err:
             show_func_err = func_err

+        try:
+            reasoning_content = completion.choices[0].message.reasoning_content
+        except AttributeError:
+            try:
+                reasoning_content = completion.choices[0].message.reasoning
+            except AttributeError:
+                reasoning_content = None
+
         try:
             self.partial_response_content = completion.choices[0].message.content or ""
         except AttributeError as content_err:

@@ -1638,6 +1735,15 @@ class Coder:
             raise Exception("No data found in LLM response!")

         show_resp = self.render_incremental_response(True)

+        if reasoning_content:
+            formatted_reasoning = format_reasoning_content(
+                reasoning_content, self.reasoning_tag_name
+            )
+            show_resp = formatted_reasoning + show_resp
+
+        show_resp = replace_reasoning_tags(show_resp, self.reasoning_tag_name)
+
         self.io.assistant_output(show_resp, pretty=self.show_pretty())

         if (
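The nested try/except above exists because providers disagree on where reasoning text lives: some responses expose `message.reasoning_content`, others `message.reasoning`, and most expose neither. The same lookup can be written as a small helper (a sketch, not how the diff factors it):

```python
def get_reasoning(message):
    # Try the known attribute names in order; fall back to None.
    for attr in ("reasoning_content", "reasoning"):
        try:
            return getattr(message, attr)
        except AttributeError:
            continue
    return None
```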
@@ -1647,6 +1753,8 @@ class Coder:
             raise FinishReasonLength()

     def show_send_output_stream(self, completion):
+        received_content = False
+
         for chunk in completion:
             if len(chunk.choices) == 0:
                 continue

@@ -1665,19 +1773,46 @@ class Coder:
                         self.partial_response_function_call[k] += v
                     else:
                         self.partial_response_function_call[k] = v
+                received_content = True
             except AttributeError:
                 pass

+            text = ""
+
             try:
-                text = chunk.choices[0].delta.content
-                if text:
-                    self.partial_response_content += text
+                reasoning_content = chunk.choices[0].delta.reasoning_content
             except AttributeError:
-                text = None
+                try:
+                    reasoning_content = chunk.choices[0].delta.reasoning
+                except AttributeError:
+                    reasoning_content = None
+
+            if reasoning_content:
+                if not self.got_reasoning_content:
+                    text += f"<{REASONING_TAG}>\n\n"
+                text += reasoning_content
+                self.got_reasoning_content = True
+                received_content = True
+
+            try:
+                content = chunk.choices[0].delta.content
+                if content:
+                    if self.got_reasoning_content and not self.ended_reasoning_content:
+                        text += f"\n\n</{self.reasoning_tag_name}>\n\n"
+                        self.ended_reasoning_content = True
+
+                    text += content
+                    received_content = True
+            except AttributeError:
+                pass
+
+            self.partial_response_content += text

             if self.show_pretty():
                 self.live_incremental_response(False)
             elif text:
+                # Apply reasoning tag formatting
+                text = replace_reasoning_tags(text, self.reasoning_tag_name)
                 try:
                     sys.stdout.write(text)
                 except UnicodeEncodeError:
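In the streaming path, reasoning deltas are wrapped in an XML-style tag as they arrive: the opening tag is emitted once before the first reasoning chunk, and the closing tag once, when the first regular content chunk shows up. A self-contained sketch of that state machine, assuming a tag name of "thinking":

```python
REASONING_TAG = "thinking"

def merge_stream(chunks):
    """chunks is a list of (reasoning_delta, content_delta) pairs."""
    got_reasoning, ended_reasoning, out = False, False, ""
    for reasoning, content in chunks:
        text = ""
        if reasoning:
            if not got_reasoning:
                text += f"<{REASONING_TAG}>\n\n"  # open the tag once
            text += reasoning
            got_reasoning = True
        if content:
            if got_reasoning and not ended_reasoning:
                text += f"\n\n</{REASONING_TAG}>\n\n"  # close before real output
                ended_reasoning = True
            text += content
        out += text
    return out

print(merge_stream([("thinking it over...", None), (None, "Here is the edit.")]))
```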
@@ -1689,12 +1824,25 @@ class Coder:
                     sys.stdout.flush()
                     yield text

+        if not received_content:
+            self.io.tool_warning("Empty response received from LLM. Check your provider account?")
+
     def live_incremental_response(self, final):
         show_resp = self.render_incremental_response(final)
+        # Apply any reasoning tag formatting
+        show_resp = replace_reasoning_tags(show_resp, self.reasoning_tag_name)
         self.mdstream.update(show_resp, final=final)

     def render_incremental_response(self, final):
-        return self.get_multi_response_content()
+        return self.get_multi_response_content_in_progress()
+
+    def remove_reasoning_content(self):
+        """Remove reasoning content from the model's response."""
+
+        self.partial_response_content = remove_reasoning_content(
+            self.partial_response_content,
+            self.reasoning_tag_name,
+        )

     def calculate_and_show_tokens_and_cost(self, messages, completion=None):
         prompt_tokens = 0

@@ -1817,12 +1965,13 @@ class Coder:
         self.message_tokens_sent = 0
         self.message_tokens_received = 0

-    def get_multi_response_content(self, final=False):
+    def get_multi_response_content_in_progress(self, final=False):
         cur = self.multi_response_content or ""
         new = self.partial_response_content or ""

         if new.rstrip() != new and not final:
             new = new.rstrip()

         return cur + new

     def get_rel_fname(self, fname):
@@ -401,6 +401,9 @@ missing_filename_err = (
     " {fence[0]}"
 )

+# Always be willing to treat triple-backticks as a fence when searching for filenames
+triple_backticks = "`" * 3
+

 def strip_filename(filename, fence):
     filename = filename.strip()

@@ -409,7 +412,7 @@ def strip_filename(filename, fence):
         return

     start_fence = fence[0]
-    if filename.startswith(start_fence):
+    if filename.startswith(start_fence) or filename.startswith(triple_backticks):
         return

     filename = filename.rstrip(":")

@@ -546,7 +549,7 @@ def find_filename(lines, fence, valid_fnames):
             filenames.append(filename)

         # Only continue as long as we keep seeing fences
-        if not line.startswith(fence[0]):
+        if not line.startswith(fence[0]) and not line.startswith(triple_backticks):
             break

     if not filenames:
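The `triple_backticks` change matters when aider asks a model for *quadruple*-backtick fences: models often emit plain triple backticks anyway, so filename detection now rejects a candidate line that starts with either fence. A sketch of the relaxed check:

```python
triple_backticks = "`" * 3

def looks_like_fence(line, fence=("````", "````")):
    # A "filename" line that is actually a fence opener is not a filename.
    return line.startswith(fence[0]) or line.startswith(triple_backticks)

assert looks_like_fence("```python")
assert looks_like_fence("````")
assert not looks_like_fence("aider/io.py:")
```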
@@ -157,7 +157,7 @@ Every *SEARCH/REPLACE block* must use this format:
 8. The closing fence: {fence[1]}

 Use the *FULL* file path, as shown to you by the user.
+{quad_backtick_reminder}
 Every *SEARCH* section must *EXACTLY MATCH* the existing file content, character for character, including all comments, docstrings, etc.
 If the file contains code or other data wrapped/escaped in json/xml/quotes or other containers, you need to propose edits to the literal contents of the file, including the container markup.

@@ -38,7 +38,7 @@ class SingleWholeFileFunctionCoder(Coder):
         self.gpt_prompts = SingleWholeFileFunctionPrompts()
         super().__init__(*args, **kwargs)

-    def update_cur_messages(self, edited):
+    def add_assistant_reply_to_cur_messages(self, edited):
         if edited:
             self.cur_messages += [
                 dict(role="assistant", content=self.gpt_prompts.redacted_edit_message)
@@ -17,10 +17,10 @@ class WholeFileCoder(Coder):
         try:
             return self.get_edits(mode="diff")
         except ValueError:
-            return self.get_multi_response_content()
+            return self.get_multi_response_content_in_progress()

     def get_edits(self, mode="update"):
-        content = self.get_multi_response_content()
+        content = self.get_multi_response_content_in_progress()

         chat_files = self.get_inchat_relative_files()

@@ -49,7 +49,7 @@ class WholeFileFunctionCoder(Coder):
         self.gpt_prompts = WholeFileFunctionPrompts()
         super().__init__(*args, **kwargs)

-    def update_cur_messages(self, edited):
+    def add_assistant_reply_to_cur_messages(self, edited):
         if edited:
             self.cur_messages += [
                 dict(role="assistant", content=self.gpt_prompts.redacted_edit_message)

@@ -81,7 +81,7 @@ class Commands:
         "Switch to a new LLM"

         model_name = args.strip()
-        model = models.Model(model_name)
+        model = models.Model(model_name, weak_model=self.coder.main_model.weak_model.name)
         models.sanity_check_models(self.io, model)
         raise SwitchCoder(main_model=model)

@@ -404,6 +404,7 @@ class Commands:

         fence = "`" * 3

+        file_res = []
         # files
         for fname in self.coder.abs_fnames:
             relative_fname = self.coder.get_rel_fname(fname)

@@ -414,7 +415,7 @@ class Commands:
                 # approximate
                 content = f"{relative_fname}\n{fence}\n" + content + "{fence}\n"
                 tokens = self.coder.main_model.token_count(content)
-            res.append((tokens, f"{relative_fname}", "/drop to remove"))
+            file_res.append((tokens, f"{relative_fname}", "/drop to remove"))

         # read-only files
         for fname in self.coder.abs_read_only_fnames:

@@ -424,7 +425,10 @@ class Commands:
                 # approximate
                 content = f"{relative_fname}\n{fence}\n" + content + "{fence}\n"
                 tokens = self.coder.main_model.token_count(content)
-            res.append((tokens, f"{relative_fname} (read-only)", "/drop to remove"))
+            file_res.append((tokens, f"{relative_fname} (read-only)", "/drop to remove"))
+
+        file_res.sort()
+        res.extend(file_res)

         self.io.tool_output(
             f"Approximate context window usage for {self.coder.main_model.name}, in tokens:"
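The `/tokens` change collects the per-file rows in a separate `file_res` list so they can be sorted before being appended to the report. Roughly:

```python
res = [(1024, "chat history", "/clear to clear")]

file_res = []
for tokens, name in ((1200, "b.py"), (300, "a.py (read-only)")):
    file_res.append((tokens, name, "/drop to remove"))

file_res.sort()  # tuples sort by token count first, so smallest files lead
res.extend(file_res)
```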
@@ -756,6 +760,7 @@ class Commands:

         if self.io.confirm_ask(f"No files matched '{word}'. Do you want to create {fname}?"):
             try:
+                fname.parent.mkdir(parents=True, exist_ok=True)
                 fname.touch()
                 all_matched_files.add(str(fname))
             except OSError as e:

@@ -1061,18 +1066,15 @@ class Commands:
         )

     def cmd_ask(self, args):
-        """Ask questions about the code base without editing any files.
-        If no prompt is provided, switches to ask mode."""
+        """Ask questions about the code base without editing any files. If no prompt provided, switches to ask mode."""  # noqa
         return self._generic_chat_command(args, "ask")

     def cmd_code(self, args):
-        """Ask for changes to your code.
-        If no prompt is provided, switches to code mode."""
+        """Ask for changes to your code. If no prompt provided, switches to code mode."""  # noqa
         return self._generic_chat_command(args, self.coder.main_model.edit_format)

     def cmd_architect(self, args):
-        """Enter architect mode to discuss high-level design and architecture.
-        If no prompt is provided, switches to architect mode."""
+        """Enter architect/editor mode using 2 different models. If no prompt provided, switches to architect/editor mode."""  # noqa
         return self._generic_chat_command(args, "architect")

     def _generic_chat_command(self, args, edit_format):
aider/deprecated.py (new file, 125 lines)
@@ -0,0 +1,125 @@
+def add_deprecated_model_args(parser, group):
+    """Add deprecated model shortcut arguments to the argparse parser."""
+    opus_model = "claude-3-opus-20240229"
+    group.add_argument(
+        "--opus",
+        action="store_true",
+        help=f"Use {opus_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    sonnet_model = "anthropic/claude-3-7-sonnet-20250219"
+    group.add_argument(
+        "--sonnet",
+        action="store_true",
+        help=f"Use {sonnet_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    haiku_model = "claude-3-5-haiku-20241022"
+    group.add_argument(
+        "--haiku",
+        action="store_true",
+        help=f"Use {haiku_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    gpt_4_model = "gpt-4-0613"
+    group.add_argument(
+        "--4",
+        "-4",
+        action="store_true",
+        help=f"Use {gpt_4_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    gpt_4o_model = "gpt-4o"
+    group.add_argument(
+        "--4o",
+        action="store_true",
+        help=f"Use {gpt_4o_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    gpt_4o_mini_model = "gpt-4o-mini"
+    group.add_argument(
+        "--mini",
+        action="store_true",
+        help=f"Use {gpt_4o_mini_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    gpt_4_turbo_model = "gpt-4-1106-preview"
+    group.add_argument(
+        "--4-turbo",
+        action="store_true",
+        help=f"Use {gpt_4_turbo_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    gpt_3_model_name = "gpt-3.5-turbo"
+    group.add_argument(
+        "--35turbo",
+        "--35-turbo",
+        "--3",
+        "-3",
+        action="store_true",
+        help=f"Use {gpt_3_model_name} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    deepseek_model = "deepseek/deepseek-chat"
+    group.add_argument(
+        "--deepseek",
+        action="store_true",
+        help=f"Use {deepseek_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    o1_mini_model = "o1-mini"
+    group.add_argument(
+        "--o1-mini",
+        action="store_true",
+        help=f"Use {o1_mini_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+    o1_preview_model = "o1-preview"
+    group.add_argument(
+        "--o1-preview",
+        action="store_true",
+        help=f"Use {o1_preview_model} model for the main chat (deprecated, use --model)",
+        default=False,
+    )
+
+
+def handle_deprecated_model_args(args, io):
+    """Handle deprecated model shortcut arguments and provide appropriate warnings."""
+    # Define model mapping
+    model_map = {
+        "opus": "claude-3-opus-20240229",
+        "sonnet": "anthropic/claude-3-7-sonnet-20250219",
+        "haiku": "claude-3-5-haiku-20241022",
+        "4": "gpt-4-0613",
+        "4o": "gpt-4o",
+        "mini": "gpt-4o-mini",
+        "4_turbo": "gpt-4-1106-preview",
+        "35turbo": "gpt-3.5-turbo",
+        "deepseek": "deepseek/deepseek-chat",
+        "o1_mini": "o1-mini",
+        "o1_preview": "o1-preview",
+    }
+
+    # Check if any deprecated args are used
+    for arg_name, model_name in model_map.items():
+        arg_name_clean = arg_name.replace("-", "_")
+        if hasattr(args, arg_name_clean) and getattr(args, arg_name_clean):
+            # Find preferred name to display in warning
+            from aider.models import MODEL_ALIASES
+
+            display_name = model_name
+            # Check if there's a shorter alias for this model
+            for alias, full_name in MODEL_ALIASES.items():
+                if full_name == model_name:
+                    display_name = alias
+                    break
+
+            # Show the warning
+            io.tool_warning(
+                f"The --{arg_name.replace('_', '-')} flag is deprecated and will be removed in a"
+                f" future version. Please use --model {display_name} instead."
+            )
+
+            # Set the model
+            args.model = model_name
+            break
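Rough usage of the new module: the shortcut flags are registered on an argument group, and `handle_deprecated_model_args` translates the first one it finds into `args.model` plus a warning. A sketch with `io` stubbed out (FakeIO is not part of aider):

```python
import argparse

from aider.deprecated import add_deprecated_model_args, handle_deprecated_model_args

class FakeIO:
    def tool_warning(self, msg):
        print("warning:", msg)

parser = argparse.ArgumentParser()
group = parser.add_argument_group("deprecated model settings")
add_deprecated_model_args(parser, group)

args = parser.parse_args(["--sonnet"])
handle_deprecated_model_args(args, FakeIO())
print(args.model)  # anthropic/claude-3-7-sonnet-20250219
```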
@@ -10,12 +10,13 @@ This module provides functionality to:

 import os
 import platform
-import shlex
 import subprocess
 import tempfile

 from rich.console import Console

+from aider.dump import dump  # noqa
+
 DEFAULT_EDITOR_NIX = "vi"
 DEFAULT_EDITOR_OS_X = "vim"
 DEFAULT_EDITOR_WINDOWS = "notepad"

@@ -87,13 +88,13 @@ def get_environment_editor(default=None):

 def discover_editor(editor_override=None):
     """
-    Discovers and returns the appropriate editor command as a list of arguments.
+    Discovers and returns the appropriate editor command.

     Handles cases where the editor command includes arguments, including quoted arguments
     with spaces (e.g. 'vim -c "set noswapfile"').

-    :return: A list of command parts ready for subprocess execution
-    :rtype: list[str]
+    :return: The editor command as a string
+    :rtype: str
     """
     system = platform.system()
     if system == "Windows":

@@ -102,14 +103,13 @@ def discover_editor(editor_override=None):
         default_editor = DEFAULT_EDITOR_OS_X
     else:
         default_editor = DEFAULT_EDITOR_NIX

     if editor_override:
         editor = editor_override
     else:
         editor = get_environment_editor(default_editor)
-    try:
-        return shlex.split(editor)
-    except ValueError as e:
-        raise RuntimeError(f"Invalid editor command format '{editor}': {e}")
+    return editor


 def pipe_editor(input_data="", suffix=None, editor=None):

@@ -128,9 +128,10 @@ def pipe_editor(input_data="", suffix=None, editor=None):
     :rtype: str
     """
     filepath = write_temp_file(input_data, suffix)
-    command_parts = discover_editor(editor)
-    command_parts.append(filepath)
-    subprocess.call(command_parts)
+    command_str = discover_editor(editor)
+    command_str += " " + filepath
+    subprocess.call(command_str, shell=True)
     with open(filepath, "r") as f:
         output_data = f.read()
     try:
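The editor change trades `shlex` parsing for the shell itself: `discover_editor` now returns the raw command string and `pipe_editor` runs it with `shell=True`, so anything that works as a `$EDITOR` value (flags, quoted arguments) is handed through untouched, at the cost of no longer pre-validating the command. A sketch of the before/after, using a harmless command in place of a real editor:

```python
import shlex
import subprocess

editor = "cat"  # stand-in for e.g. 'vim -c "set noswapfile"'
filepath = "/tmp/aider_editor_demo.txt"
with open(filepath, "w") as f:
    f.write("hello\n")

# Old behavior: split the command into argv and exec it directly.
subprocess.call(shlex.split(editor) + [filepath])

# New behavior: let the shell parse the whole command line.
subprocess.call(f"{editor} {filepath}", shell=True)
```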
@@ -1,5 +1,7 @@
 from dataclasses import dataclass

+from aider.dump import dump  # noqa: F401
+

 @dataclass
 class ExInfo:

@@ -50,6 +52,7 @@ EXCEPTIONS = [

 class LiteLLMExceptions:
     exceptions = dict()
+    exception_info = {exi.name: exi for exi in EXCEPTIONS}

     def __init__(self):
         self._load()

@@ -58,20 +61,13 @@ class LiteLLMExceptions:
         import litellm

         for var in dir(litellm):
-            if not var.endswith("Error"):
-                continue
-
-            ex_info = None
-            for exi in EXCEPTIONS:
-                if var == exi.name:
-                    ex_info = exi
-                    break
-
-            if strict and not ex_info:
-                raise ValueError(f"{var} is in litellm but not in aider's exceptions list")
+            if var.endswith("Error"):
+                if var not in self.exception_info:
+                    raise ValueError(f"{var} is in litellm but not in aider's exceptions list")

+        for var in self.exception_info:
             ex = getattr(litellm, var)
-            self.exceptions[ex] = ex_info
+            self.exceptions[ex] = self.exception_info[var]

     def exceptions_tuple(self):
         return tuple(self.exceptions)
@@ -2,7 +2,6 @@ import argparse

 from aider import models, prompts
 from aider.dump import dump  # noqa: F401
-from aider.sendchat import simple_send_with_retries


 class ChatSummary:

@@ -26,6 +25,12 @@ class ChatSummary:
         return sized

     def summarize(self, messages, depth=0):
+        messages = self.summarize_real(messages)
+        if messages and messages[-1]["role"] != "assistant":
+            messages.append(dict(role="assistant", content="Ok."))
+        return messages
+
+    def summarize_real(self, messages, depth=0):
         if not self.models:
             raise ValueError("No models available for summarization")

@@ -88,7 +93,7 @@ class ChatSummary:
         if summary_tokens + tail_tokens < self.max_tokens:
             return result

-        return self.summarize(result, depth + 1)
+        return self.summarize_real(result, depth + 1)

     def summarize_all(self, messages):
         content = ""

@@ -108,7 +113,7 @@ class ChatSummary:

         for model in self.models:
             try:
-                summary = simple_send_with_retries(model, summarize_messages)
+                summary = model.simple_send_with_retries(summarize_messages)
                 if summary is not None:
                     summary = prompts.summary_prefix + summary
                     return [dict(role="user", content=summary)]
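Two things to note in the `ChatSummary` changes: recursion moves to `summarize_real()`, and the public `summarize()` now pads the result so the summarized history always ends on an assistant turn, presumably to keep later turns alternating roles cleanly. A sketch of the wrapper:

```python
def summarize(messages, summarize_real):
    # summarize_real stands in for the recursive summarization pass.
    messages = summarize_real(messages)
    if messages and messages[-1]["role"] != "assistant":
        messages.append(dict(role="assistant", content="Ok."))
    return messages

out = summarize([dict(role="user", content="summary of the chat...")], lambda m: m)
assert out[-1]["role"] == "assistant"
```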
aider/io.py (240 changed lines)
@@ -1,6 +1,9 @@
 import base64
+import functools
 import os
+import shutil
 import signal
+import subprocess
 import time
 import webbrowser
 from collections import defaultdict

@@ -33,6 +36,37 @@ from aider.mdstream import MarkdownStream
 from .dump import dump  # noqa: F401
 from .utils import is_image_file

+# Constants
+NOTIFICATION_MESSAGE = "Aider is waiting for your input"
+
+
+def ensure_hash_prefix(color):
+    """Ensure hex color values have a # prefix."""
+    if not color:
+        return color
+    if isinstance(color, str) and color.strip() and not color.startswith("#"):
+        # Check if it's a valid hex color (3 or 6 hex digits)
+        if all(c in "0123456789ABCDEFabcdef" for c in color) and len(color) in (3, 6):
+            return f"#{color}"
+    return color
+
+
+def restore_multiline(func):
+    """Decorator to restore multiline mode after function execution"""
+
+    @functools.wraps(func)
+    def wrapper(self, *args, **kwargs):
+        orig_multiline = self.multiline_mode
+        self.multiline_mode = False
+        try:
+            return func(self, *args, **kwargs)
+        except Exception:
+            raise
+        finally:
+            self.multiline_mode = orig_multiline
+
+    return wrapper
+
+
 @dataclass
 class ConfirmGroup:
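`ensure_hash_prefix` lets users write color settings like `32FF32` without the `#` that rich requires. A quick check of its behavior as defined above:

```python
def ensure_hash_prefix(color):
    # Same logic as the helper above: prefix bare 3- or 6-digit hex values.
    if not color:
        return color
    if isinstance(color, str) and color.strip() and not color.startswith("#"):
        if all(c in "0123456789ABCDEFabcdef" for c in color) and len(color) in (3, 6):
            return f"#{color}"
    return color

for color in ("32FF32", "fff", "#aabbcc", "red", ""):
    print(repr(color), "->", repr(ensure_hash_prefix(color)))
# '32FF32' -> '#32FF32', 'fff' -> '#fff'; named colors and '' pass through.
```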
@@ -178,6 +212,8 @@ class InputOutput:
     num_error_outputs = 0
     num_user_asks = 0
     clipboard_watcher = None
+    bell_on_next_input = False
+    notifications_command = None

     def __init__(
         self,

@@ -198,6 +234,7 @@ class InputOutput:
         completion_menu_current_bg_color=None,
         code_theme="default",
         encoding="utf-8",
+        line_endings="platform",
         dry_run=False,
         llm_history_file=None,
         editingmode=EditingMode.EMACS,

@@ -205,25 +242,40 @@ class InputOutput:
         file_watcher=None,
         multiline_mode=False,
         root=".",
+        notifications=False,
+        notifications_command=None,
     ):
         self.placeholder = None
         self.interrupted = False
         self.never_prompts = set()
         self.editingmode = editingmode
         self.multiline_mode = multiline_mode
+        self.bell_on_next_input = False
+        self.notifications = notifications
+        if notifications and notifications_command is None:
+            self.notifications_command = self.get_default_notification_command()
+        else:
+            self.notifications_command = notifications_command

         no_color = os.environ.get("NO_COLOR")
         if no_color is not None and no_color != "":
             pretty = False

-        self.user_input_color = user_input_color if pretty else None
-        self.tool_output_color = tool_output_color if pretty else None
-        self.tool_error_color = tool_error_color if pretty else None
-        self.tool_warning_color = tool_warning_color if pretty else None
-        self.assistant_output_color = assistant_output_color
-        self.completion_menu_color = completion_menu_color if pretty else None
-        self.completion_menu_bg_color = completion_menu_bg_color if pretty else None
-        self.completion_menu_current_color = completion_menu_current_color if pretty else None
-        self.completion_menu_current_bg_color = completion_menu_current_bg_color if pretty else None
+        self.user_input_color = ensure_hash_prefix(user_input_color) if pretty else None
+        self.tool_output_color = ensure_hash_prefix(tool_output_color) if pretty else None
+        self.tool_error_color = ensure_hash_prefix(tool_error_color) if pretty else None
+        self.tool_warning_color = ensure_hash_prefix(tool_warning_color) if pretty else None
+        self.assistant_output_color = ensure_hash_prefix(assistant_output_color)
+        self.completion_menu_color = ensure_hash_prefix(completion_menu_color) if pretty else None
+        self.completion_menu_bg_color = (
+            ensure_hash_prefix(completion_menu_bg_color) if pretty else None
+        )
+        self.completion_menu_current_color = (
+            ensure_hash_prefix(completion_menu_current_color) if pretty else None
+        )
+        self.completion_menu_current_bg_color = (
+            ensure_hash_prefix(completion_menu_current_bg_color) if pretty else None
+        )

         self.code_theme = code_theme

@@ -244,6 +296,15 @@ class InputOutput:
         self.chat_history_file = None

         self.encoding = encoding
+        valid_line_endings = {"platform", "lf", "crlf"}
+        if line_endings not in valid_line_endings:
+            raise ValueError(
+                f"Invalid line_endings value: {line_endings}. "
+                f"Must be one of: {', '.join(valid_line_endings)}"
+            )
+        self.newline = (
+            None if line_endings == "platform" else "\n" if line_endings == "lf" else "\r\n"
+        )
         self.dry_run = dry_run

         current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
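The new `line_endings` option maps onto the `newline` argument of Python's `open()`: `"platform"` keeps the OS default (`newline=None`), while `"lf"`/`"crlf"` force a terminator when aider writes files. A sketch of the mapping:

```python
def newline_for(line_endings):
    valid = {"platform", "lf", "crlf"}
    if line_endings not in valid:
        raise ValueError(f"Invalid line_endings value: {line_endings}")
    # None defers to the platform; otherwise "\n" is translated on write.
    return None if line_endings == "platform" else "\n" if line_endings == "lf" else "\r\n"

with open("/tmp/demo.txt", "w", encoding="utf-8", newline=newline_for("crlf")) as f:
    f.write("one\ntwo\n")  # stored with CRLF endings on every platform
```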
@@ -342,10 +403,6 @@ class InputOutput:
         try:
             with open(str(filename), "r", encoding=self.encoding) as f:
                 return f.read()
-        except OSError as err:
-            if not silent:
-                self.tool_error(f"{filename}: unable to read: {err}")
-            return
         except FileNotFoundError:
             if not silent:
                 self.tool_error(f"{filename}: file not found error")

@@ -354,6 +411,10 @@ class InputOutput:
             if not silent:
                 self.tool_error(f"{filename}: is a directory")
             return
+        except OSError as err:
+            if not silent:
+                self.tool_error(f"{filename}: unable to read: {err}")
+            return
         except UnicodeError as e:
             if not silent:
                 self.tool_error(f"{filename}: {e}")

@@ -375,7 +436,7 @@ class InputOutput:
         delay = initial_delay
         for attempt in range(max_retries):
             try:
-                with open(str(filename), "w", encoding=self.encoding) as f:
+                with open(str(filename), "w", encoding=self.encoding, newline=self.newline) as f:
                     f.write(content)
                 return  # Successfully wrote the file
             except PermissionError as err:

@@ -416,6 +477,9 @@ class InputOutput:
     ):
         self.rule()

+        # Ring the bell if needed
+        self.ring_bell()
+
         rel_fnames = list(rel_fnames)
         show = ""
         if rel_fnames:

@@ -508,6 +572,9 @@ class InputOutput:
             if self.clipboard_watcher:
                 self.clipboard_watcher.start()

+            def get_continuation(width, line_number, is_soft_wrap):
+                return ". "
+
             line = self.prompt_session.prompt(
                 show,
                 default=default,

@@ -517,6 +584,7 @@ class InputOutput:
                 style=style,
                 key_bindings=kb,
                 complete_while_typing=True,
+                prompt_continuation=get_continuation,
             )
         else:
             line = input(show)
@@ -652,6 +720,7 @@ class InputOutput:
                 return True
         return False

+    @restore_multiline
     def confirm_ask(
         self,
         question,

@@ -661,11 +730,11 @@ class InputOutput:
         group=None,
         allow_never=False,
     ):
-        # Temporarily disable multiline mode for yes/no prompts
-        orig_multiline = self.multiline_mode
-        self.multiline_mode = False
         self.num_user_asks += 1

+        # Ring the bell if needed
+        self.ring_bell()
+
         question_id = (question, subject)

         if question_id in self.never_prompts:

@@ -676,19 +745,22 @@ class InputOutput:
         if group:
             allow_never = True

-        valid_responses = ["yes", "no"]
+        valid_responses = ["yes", "no", "skip", "all"]
         options = " (Y)es/(N)o"
         if group:
             if not explicit_yes_required:
                 options += "/(A)ll"
-                valid_responses.append("all")
             options += "/(S)kip all"
-            valid_responses.append("skip")
         if allow_never:
             options += "/(D)on't ask again"
             valid_responses.append("don't")

-        question += options + " [Yes]: "
+        if default.lower().startswith("y"):
+            question += options + " [Yes]: "
+        elif default.lower().startswith("n"):
+            question += options + " [No]: "
+        else:
+            question += options + f" [{default}]: "

         if subject:
             self.tool_output()

@@ -717,17 +789,22 @@ class InputOutput:
             self.user_input(f"{question}{res}", log_only=False)
         else:
             while True:
-                if self.prompt_session:
-                    res = self.prompt_session.prompt(
-                        question,
-                        style=style,
-                        complete_while_typing=False,
-                    )
-                else:
-                    res = input(question)
+                try:
+                    if self.prompt_session:
+                        res = self.prompt_session.prompt(
+                            question,
+                            style=style,
+                            complete_while_typing=False,
+                        )
+                    else:
+                        res = input(question)
+                except EOFError:
+                    # Treat EOF (Ctrl+D) as if the user pressed Enter
+                    res = default
+                    break

                 if not res:
-                    res = "y"  # Default to Yes if no input
+                    res = default
                     break
                 res = res.lower()
                 good = any(valid_response.startswith(res) for valid_response in valid_responses)
@@ -762,17 +839,15 @@ class InputOutput:
             hist = f"{question.strip()} {res}"
             self.append_chat_history(hist, linebreak=True, blockquote=True)

-        # Restore original multiline mode
-        self.multiline_mode = orig_multiline
-
         return is_yes

+    @restore_multiline
     def prompt_ask(self, question, default="", subject=None):
-        # Temporarily disable multiline mode for prompts
-        orig_multiline = self.multiline_mode
-        self.multiline_mode = False
         self.num_user_asks += 1

+        # Ring the bell if needed
+        self.ring_bell()
+
         if subject:
             self.tool_output()
             self.tool_output(subject, bold=True)

@@ -784,24 +859,25 @@ class InputOutput:
         elif self.yes is False:
             res = "no"
         else:
-            if self.prompt_session:
-                res = self.prompt_session.prompt(
-                    question + " ",
-                    default=default,
-                    style=style,
-                    complete_while_typing=True,
-                )
-            else:
-                res = input(question + " ")
+            try:
+                if self.prompt_session:
+                    res = self.prompt_session.prompt(
+                        question + " ",
+                        default=default,
+                        style=style,
+                        complete_while_typing=True,
+                    )
+                else:
+                    res = input(question + " ")
+            except EOFError:
+                # Treat EOF (Ctrl+D) as if the user pressed Enter
+                res = default

         hist = f"{question.strip()} {res.strip()}"
         self.append_chat_history(hist, linebreak=True, blockquote=True)
         if self.yes in (True, False):
             self.tool_output(hist)

-        # Restore original multiline mode
-        self.multiline_mode = orig_multiline
-
         return res

     def _tool_message(self, message="", strip=True, color=None):
@@ -813,13 +889,16 @@ class InputOutput:
         hist = message.strip() if strip else message
         self.append_chat_history(hist, linebreak=True, blockquote=True)

-        message = Text(message)
+        if not isinstance(message, Text):
+            message = Text(message)
         style = dict(style=color) if self.pretty and color else dict()
         try:
             self.console.print(message, **style)
         except UnicodeEncodeError:
             # Fallback to ASCII-safe output
-            message = message.encode("ascii", errors="replace").decode("ascii")
+            if isinstance(message, Text):
+                message = message.plain
+            message = str(message).encode("ascii", errors="replace").decode("ascii")
             self.console.print(message, **style)

     def tool_error(self, message="", strip=True):

@@ -854,6 +933,10 @@ class InputOutput:
         return mdStream

     def assistant_output(self, message, pretty=None):
+        if not message:
+            self.tool_warning("Empty response received from LLM. Check your provider account?")
+            return
+
         show_resp = message

         # Coder will force pretty off if fence is not triple-backticks

@@ -865,7 +948,7 @@ class InputOutput:
                 message, style=self.assistant_output_color, code_theme=self.code_theme
             )
         else:
-            show_resp = Text(message or "<no response>")
+            show_resp = Text(message or "(empty response)")

         self.console.print(show_resp)

@@ -876,6 +959,61 @@ class InputOutput:
     def print(self, message=""):
         print(message)

+    def llm_started(self):
+        """Mark that the LLM has started processing, so we should ring the bell on next input"""
+        self.bell_on_next_input = True
+
+    def get_default_notification_command(self):
+        """Return a default notification command based on the operating system."""
+        import platform
+
+        system = platform.system()
+
+        if system == "Darwin":  # macOS
+            # Check for terminal-notifier first
+            if shutil.which("terminal-notifier"):
+                return f"terminal-notifier -title 'Aider' -message '{NOTIFICATION_MESSAGE}'"
+            # Fall back to osascript
+            return (
+                f'osascript -e \'display notification "{NOTIFICATION_MESSAGE}" with title "Aider"\''
+            )
+        elif system == "Linux":
+            # Check for common Linux notification tools
+            for cmd in ["notify-send", "zenity"]:
+                if shutil.which(cmd):
+                    if cmd == "notify-send":
+                        return f"notify-send 'Aider' '{NOTIFICATION_MESSAGE}'"
+                    elif cmd == "zenity":
+                        return f"zenity --notification --text='{NOTIFICATION_MESSAGE}'"
+            return None  # No known notification tool found
+        elif system == "Windows":
+            # PowerShell notification
+            return (
+                "powershell -command"
+                " \"[System.Reflection.Assembly]::LoadWithPartialName('System.Windows.Forms');"
+                f" [System.Windows.Forms.MessageBox]::Show('{NOTIFICATION_MESSAGE}',"
+                " 'Aider')\""
+            )
+
+        return None  # Unknown system
+
+    def ring_bell(self):
+        """Ring the terminal bell if needed and clear the flag"""
+        if self.bell_on_next_input and self.notifications:
+            if self.notifications_command:
+                try:
+                    result = subprocess.run(
+                        self.notifications_command, shell=True, capture_output=True
+                    )
+                    if result.returncode != 0 and result.stderr:
+                        error_msg = result.stderr.decode("utf-8", errors="replace")
+                        self.tool_warning(f"Failed to run notifications command: {error_msg}")
+                except Exception as e:
+                    self.tool_warning(f"Failed to run notifications command: {e}")
+            else:
+                print("\a", end="", flush=True)  # Ring the bell
+            self.bell_on_next_input = False  # Clear the flag
+
     def toggle_multiline_mode(self):
         """Toggle between normal and multiline input modes"""
         self.multiline_mode = not self.multiline_mode
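The notification plumbing fits together as: `Coder.send_message()` calls `io.llm_started()`, which sets a flag; the next input or confirmation prompt calls `ring_bell()`, which fires the configured command (or the terminal bell) exactly once. Picking the default command is platform detection plus `shutil.which`; a sketch of the Linux branch:

```python
import shutil

NOTIFICATION_MESSAGE = "Aider is waiting for your input"

# Prefer notify-send, fall back to zenity, else report no tool available.
if shutil.which("notify-send"):
    cmd = f"notify-send 'Aider' '{NOTIFICATION_MESSAGE}'"
elif shutil.which("zenity"):
    cmd = f"zenity --notification --text='{NOTIFICATION_MESSAGE}'"
else:
    cmd = None
print(cmd)
```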
@@ -8,7 +8,7 @@ from dataclasses import dataclass
 from pathlib import Path

 from grep_ast import TreeContext, filename_to_lang
-from tree_sitter_languages import get_parser  # noqa: E402
+from grep_ast.tsl import get_parser  # noqa: E402

 from aider.dump import dump  # noqa: F401
 from aider.run_cmd import run_cmd_subprocess  # noqa: F401

@@ -2,6 +2,8 @@ import importlib
 import os
 import warnings

+from aider.dump import dump  # noqa: F401
+
 warnings.filterwarnings("ignore", category=UserWarning, module="pydantic")

 AIDER_SITE_URL = "https://aider.chat"

aider/main.py (126 changed lines)
@@ -1,4 +1,3 @@
-import configparser
 import json
 import os
 import re

@@ -25,6 +24,7 @@ from aider.coders import Coder
 from aider.coders.base_coder import UnknownEditFormat
 from aider.commands import Commands, SwitchCoder
 from aider.copypaste import ClipboardWatcher
+from aider.deprecated import handle_deprecated_model_args
 from aider.format_settings import format_settings, scrub_sensitive_info
 from aider.history import ChatSummary
 from aider.io import InputOutput
@@ -126,17 +126,8 @@ def setup_git(git_root, io):
|
|||||||
if not repo:
|
if not repo:
|
||||||
return
|
return
|
||||||
|
|
||||||
user_name = None
|
user_name = repo.git.config("--default", "", "--get", "user.name") or None
|
||||||
user_email = None
|
user_email = repo.git.config("--default", "", "--get", "user.email") or None
|
||||||
with repo.config_reader() as config:
|
|
||||||
try:
|
|
||||||
user_name = config.get_value("user", "name", None)
|
|
||||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
|
||||||
pass
|
|
||||||
try:
|
|
||||||
user_email = config.get_value("user", "email", None)
|
|
||||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
|
||||||
pass
|
|
||||||
|
|
||||||
if user_name and user_email:
|
if user_name and user_email:
|
||||||
return repo.working_tree_dir
|
return repo.working_tree_dir
|
||||||
@@ -158,40 +149,39 @@ def check_gitignore(git_root, io, ask=True):
 
     try:
         repo = git.Repo(git_root)
-        if repo.ignored(".aider") and repo.ignored(".env"):
-            return
-    except ANY_GIT_ERROR:
-        pass
-
-    patterns = [".aider*", ".env"]
-    patterns_to_add = []
-
-    gitignore_file = Path(git_root) / ".gitignore"
-    if gitignore_file.exists():
-        try:
-            content = io.read_text(gitignore_file)
-            if content is None:
-                return
-        except OSError as e:
-            io.tool_error(f"Error when trying to read {gitignore_file}: {e}")
-            return
-        existing_lines = content.splitlines()
-        for pat in patterns:
-            if pat not in existing_lines:
-                if "*" in pat or (Path(git_root) / pat).exists():
-                    patterns_to_add.append(pat)
-    else:
-        content = ""
-        patterns_to_add = patterns
-
-    if not patterns_to_add:
-        return
-
-    if ask and not io.confirm_ask(f"Add {', '.join(patterns_to_add)} to .gitignore (recommended)?"):
-        return
-
-    if content and not content.endswith("\n"):
-        content += "\n"
+        patterns_to_add = []
+
+        if not repo.ignored(".aider"):
+            patterns_to_add.append(".aider*")
+
+        env_path = Path(git_root) / ".env"
+        if env_path.exists() and not repo.ignored(".env"):
+            patterns_to_add.append(".env")
+
+        if not patterns_to_add:
+            return
+    except ANY_GIT_ERROR:
+        return
+
+    gitignore_file = Path(git_root) / ".gitignore"
+    if gitignore_file.exists():
+        try:
+            content = io.read_text(gitignore_file)
+            if content is None:
+                return
+            if not content.endswith("\n"):
+                content += "\n"
+        except OSError as e:
+            io.tool_error(f"Error when trying to read {gitignore_file}: {e}")
+            return
+    else:
+        content = ""
+
+    if ask:
+        io.tool_output("You can skip this check with --no-gitignore")
+        if not io.confirm_ask(f"Add {', '.join(patterns_to_add)} to .gitignore (recommended)?"):
+            return
+
     content += "\n".join(patterns_to_add) + "\n"
 
     try:
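The reworked check_gitignore first asks git which patterns are actually missing — ".aider*" whenever ".aider" is not already ignored, and ".env" only when a .env file exists and is not ignored — and bails out early if nothing needs adding. A minimal sketch of just that selection step, using GitPython directly (the helper name and error handling here are illustrative, not aider's exact code):

import git  # GitPython
from pathlib import Path

def missing_ignore_patterns(git_root):  # hypothetical helper
    repo = git.Repo(git_root)
    patterns_to_add = []
    if not repo.ignored(".aider"):  # Repo.ignored() wraps `git check-ignore`
        patterns_to_add.append(".aider*")
    env_path = Path(git_root) / ".env"
    if env_path.exists() and not repo.ignored(".env"):
        patterns_to_add.append(".env")
    return patterns_to_add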
@@ -508,10 +498,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         litellm._load_litellm()
         litellm._lazy_module.client_session = httpx.Client(verify=False)
         litellm._lazy_module.aclient_session = httpx.AsyncClient(verify=False)
+        # Set verify_ssl on the model_info_manager
+        models.model_info_manager.set_verify_ssl(False)
 
     if args.timeout:
-        litellm._load_litellm()
-        litellm._lazy_module.request_timeout = args.timeout
+        models.request_timeout = args.timeout
 
     if args.dark_mode:
         args.user_input_color = "#32FF32"
@@ -552,10 +543,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
             code_theme=args.code_theme,
             dry_run=args.dry_run,
             encoding=args.encoding,
+            line_endings=args.line_endings,
             llm_history_file=args.llm_history_file,
             editingmode=editing_mode,
             fancy_input=args.fancy_input,
             multiline_mode=args.multiline,
+            notifications=args.notifications,
+            notifications_command=args.notifications_command,
         )
 
     io = get_io(args.pretty)
@@ -595,6 +589,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
 
     if args.openai_api_key:
         os.environ["OPENAI_API_KEY"] = args.openai_api_key
+
+    # Handle deprecated model shortcut args
+    handle_deprecated_model_args(args, io)
     if args.openai_api_base:
         os.environ["OPENAI_API_BASE"] = args.openai_api_base
     if args.openai_api_version:
@@ -748,9 +745,26 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
             models.MODEL_ALIASES[alias.strip()] = model.strip()
 
     if not args.model:
-        args.model = "gpt-4o-2024-08-06"
-        if os.environ.get("ANTHROPIC_API_KEY"):
-            args.model = "claude-3-5-sonnet-20241022"
+        # Select model based on available API keys
+        model_key_pairs = [
+            ("ANTHROPIC_API_KEY", "sonnet"),
+            ("DEEPSEEK_API_KEY", "deepseek"),
+            ("OPENROUTER_API_KEY", "openrouter/anthropic/claude-3.7-sonnet"),
+            ("OPENAI_API_KEY", "gpt-4o"),
+            ("GEMINI_API_KEY", "flash"),
+        ]
+
+        for env_key, model_name in model_key_pairs:
+            if os.environ.get(env_key):
+                args.model = model_name
+                io.tool_warning(
+                    f"Found {env_key} so using {model_name} since no --model was specified."
+                )
+                break
+        if not args.model:
+            io.tool_error("You need to specify a --model and an --api-key to use.")
+            io.offer_url(urls.models_and_keys, "Open documentation url for more info?")
+            return 1
 
     main_model = models.Model(
         args.model,
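The new default-model logic is a first-match scan over (env var, model) pairs, warning which key it found; if no key is set it errors out instead of silently defaulting. The same pattern in isolation (pairs copied from the hunk above; the function name is illustrative):

import os

MODEL_KEY_PAIRS = [
    ("ANTHROPIC_API_KEY", "sonnet"),
    ("DEEPSEEK_API_KEY", "deepseek"),
    ("OPENROUTER_API_KEY", "openrouter/anthropic/claude-3.7-sonnet"),
    ("OPENAI_API_KEY", "gpt-4o"),
    ("GEMINI_API_KEY", "flash"),
]

def pick_default_model():  # hypothetical helper
    for env_key, model_name in MODEL_KEY_PAIRS:
        if os.environ.get(env_key):
            return model_name
    return None  # caller must then insist on an explicit --model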
@@ -759,6 +773,20 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         editor_edit_format=args.editor_edit_format,
     )
 
+    # Check if deprecated remove_reasoning is set
+    if main_model.remove_reasoning is not None:
+        io.tool_warning(
+            "Model setting 'remove_reasoning' is deprecated, please use 'reasoning_tag' instead."
+        )
+
+    # Set reasoning effort if specified
+    if args.reasoning_effort is not None:
+        main_model.set_reasoning_effort(args.reasoning_effort)
+
+    # Set thinking tokens if specified
+    if args.thinking_tokens is not None:
+        main_model.set_thinking_tokens(args.thinking_tokens)
+
     if args.copy_paste and args.edit_format is None:
         if main_model.edit_format in ("diff", "whole"):
             main_model.edit_format = "editor-" + main_model.edit_format
@@ -966,6 +994,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
             analytics.event("exit", reason="Failed to read apply content")
             return
         coder.partial_response_content = content
+        # For testing #2879
+        # from aider.coders.base_coder import all_fences
+        # coder.fence = all_fences[1]
         coder.apply_updates()
         analytics.event("exit", reason="Applied updates")
         return
@@ -1033,10 +1064,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
 
     while True:
         try:
+            coder.ok_to_warm_cache = bool(args.cache_keepalive_pings)
             coder.run()
             analytics.event("exit", reason="Completed main CLI coder.run")
             return
         except SwitchCoder as switch:
+            coder.ok_to_warm_cache = False
+
            kwargs = dict(io=io, from_coder=coder)
            kwargs.update(switch.kwargs)
            if "show_announcements" in kwargs:

aider/models.py (1034 lines changed; diff suppressed because it is too large)

aider/queries/tree-sitter-language-pack/csharp-tags.scm (new file, 26 lines)
@@ -0,0 +1,26 @@
+; Based on https://github.com/tree-sitter/tree-sitter-c-sharp/blob/master/queries/tags.scm
+; MIT License.
+
+(class_declaration name: (identifier) @name.definition.class) @definition.class
+
+(class_declaration (base_list (_) @name.reference.class)) @reference.class
+
+(interface_declaration name: (identifier) @name.definition.interface) @definition.interface
+
+(interface_declaration (base_list (_) @name.reference.interface)) @reference.interface
+
+(method_declaration name: (identifier) @name.definition.method) @definition.method
+
+(object_creation_expression type: (identifier) @name.reference.class) @reference.class
+
+(type_parameter_constraints_clause (identifier) @name.reference.class) @reference.class
+
+(type_parameter_constraint (type type: (identifier) @name.reference.class)) @reference.class
+
+(variable_declaration type: (identifier) @name.reference.class) @reference.class
+
+(invocation_expression function: (member_access_expression name: (identifier) @name.reference.send)) @reference.send
+
+(namespace_declaration name: (identifier) @name.definition.module) @definition.module
+
+(namespace_declaration name: (identifier) @name.definition.module) @module
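These .scm files are tree-sitter tags queries: each pattern pairs a @name.definition.* or @name.reference.* capture on an identifier with a capture on its enclosing node. A hedged sketch of how such a query file can be run against source code, using the same grep_ast.tsl entry points that the repomap hunks below import (the code snippet and file path are illustrative):

from grep_ast.tsl import get_language, get_parser

code = b"class Foo { void Bar() { } }"
language = get_language("csharp")
parser = get_parser("csharp")
tree = parser.parse(code)

query_scm = open("aider/queries/tree-sitter-language-pack/csharp-tags.scm").read()
query = language.query(query_scm)
captures = query.captures(tree.root_node)  # result shape varies by tree-sitter version; see the repomap hunk below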

aider/queries/tree-sitter-language-pack/javascript-tags.scm (new file, 88 lines)
@@ -0,0 +1,88 @@
+(
+  (comment)* @doc
+  .
+  (method_definition
+    name: (property_identifier) @name.definition.method) @definition.method
+  (#not-eq? @name.definition.method "constructor")
+  (#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
+  (#select-adjacent! @doc @definition.method)
+)
+
+(
+  (comment)* @doc
+  .
+  [
+    (class
+      name: (_) @name.definition.class)
+    (class_declaration
+      name: (_) @name.definition.class)
+  ] @definition.class
+  (#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
+  (#select-adjacent! @doc @definition.class)
+)
+
+(
+  (comment)* @doc
+  .
+  [
+    (function_expression
+      name: (identifier) @name.definition.function)
+    (function_declaration
+      name: (identifier) @name.definition.function)
+    (generator_function
+      name: (identifier) @name.definition.function)
+    (generator_function_declaration
+      name: (identifier) @name.definition.function)
+  ] @definition.function
+  (#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
+  (#select-adjacent! @doc @definition.function)
+)
+
+(
+  (comment)* @doc
+  .
+  (lexical_declaration
+    (variable_declarator
+      name: (identifier) @name.definition.function
+      value: [(arrow_function) (function_expression)]) @definition.function)
+  (#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
+  (#select-adjacent! @doc @definition.function)
+)
+
+(
+  (comment)* @doc
+  .
+  (variable_declaration
+    (variable_declarator
+      name: (identifier) @name.definition.function
+      value: [(arrow_function) (function_expression)]) @definition.function)
+  (#strip! @doc "^[\\s\\*/]+|^[\\s\\*/]$")
+  (#select-adjacent! @doc @definition.function)
+)
+
+(assignment_expression
+  left: [
+    (identifier) @name.definition.function
+    (member_expression
+      property: (property_identifier) @name.definition.function)
+  ]
+  right: [(arrow_function) (function_expression)]
+) @definition.function
+
+(pair
+  key: (property_identifier) @name.definition.function
+  value: [(arrow_function) (function_expression)]) @definition.function
+
+(
+  (call_expression
+    function: (identifier) @name.reference.call) @reference.call
+  (#not-match? @name.reference.call "^(require)$")
+)
+
+(call_expression
+  function: (member_expression
+    property: (property_identifier) @name.reference.call)
+  arguments: (_) @reference.call)
+
+(new_expression
+  constructor: (_) @name.reference.class) @reference.class

aider/queries/tree-sitter-languages/hcl-tags.scm (new file, 77 lines)
@@ -0,0 +1,77 @@
+;; Based on https://github.com/tree-sitter-grammars/tree-sitter-hcl/blob/main/make_grammar.js
+;; Which has Apache 2.0 License
+;; tags.scm for Terraform (tree-sitter-hcl)
+
+; === Definitions: Terraform Blocks ===
+(block
+  (identifier) @block_type
+  (string_lit (template_literal) @resource_type)
+  (string_lit (template_literal) @name.definition.resource)
+  (body) @definition.resource
+) (#eq? @block_type "resource")
+
+(block
+  (identifier) @block_type
+  (string_lit (template_literal) @name.definition.module)
+  (body) @definition.module
+) (#eq? @block_type "module")
+
+(block
+  (identifier) @block_type
+  (string_lit (template_literal) @name.definition.variable)
+  (body) @definition.variable
+) (#eq? @block_type "variable")
+
+(block
+  (identifier) @block_type
+  (string_lit (template_literal) @name.definition.output)
+  (body) @definition.output
+) (#eq? @block_type "output")
+
+(block
+  (identifier) @block_type
+  (string_lit (template_literal) @name.definition.provider)
+  (body) @definition.provider
+) (#eq? @block_type "provider")
+
+(block
+  (identifier) @block_type
+  (body
+    (attribute
+      (identifier) @name.definition.local
+      (expression) @definition.local
+    )+
+  )
+) (#eq? @block_type "locals")
+
+; === References: Variables, Locals, Modules, Data, Resources ===
+((variable_expr) @ref_type
+  (get_attr (identifier) @name.reference.variable)
+) @reference.variable
+(#eq? @ref_type "var")
+
+((variable_expr) @ref_type
+  (get_attr (identifier) @name.reference.local)
+) @reference.local
+(#eq? @ref_type "local")
+
+((variable_expr) @ref_type
+  (get_attr (identifier) @name.reference.module)
+) @reference.module
+(#eq? @ref_type "module")
+
+((variable_expr) @ref_type
+  (get_attr (identifier) @data_source_type)
+  (get_attr (identifier) @name.reference.data)
+) @reference.data
+(#eq? @ref_type "data")
+
+((variable_expr) @resource_type
+  (get_attr (identifier) @name.reference.resource)
+) @reference.resource
+(#not-eq? @resource_type "var")
+(#not-eq? @resource_type "local")
+(#not-eq? @resource_type "module")
+(#not-eq? @resource_type "data")
+(#not-eq? @resource_type "provider")
+(#not-eq? @resource_type "output")

aider/queries/tree-sitter-languages/kotlin-tags.scm (new file, 27 lines)
@@ -0,0 +1,27 @@
+; Definitions
+
+(class_declaration
+  (type_identifier) @name.definition.class) @definition.class
+
+(function_declaration
+  (simple_identifier) @name.definition.function) @definition.function
+
+(object_declaration
+  (type_identifier) @name.definition.object) @definition.object
+
+; References
+
+(call_expression
+  [
+    (simple_identifier) @name.reference.call
+    (navigation_expression
+      (navigation_suffix
+        (simple_identifier) @name.reference.call))
+  ]) @reference.call
+
+(delegation_specifier
+  [
+    (user_type) @name.reference.type
+    (constructor_invocation
+      (user_type) @name.reference.type)
+  ]) @reference.type

aider/reasoning_tags.py (new file, 82 lines)
@@ -0,0 +1,82 @@
+#!/usr/bin/env python
+
+import re
+
+from aider.dump import dump  # noqa
+
+# Standard tag identifier
+REASONING_TAG = "thinking-content-" + "7bbeb8e1441453ad999a0bbba8a46d4b"
+# Output formatting
+REASONING_START = "--------------\n► **THINKING**"
+REASONING_END = "------------\n► **ANSWER**"
+
+
+def remove_reasoning_content(res, reasoning_tag):
+    """
+    Remove reasoning content from text based on tags.
+
+    Args:
+        res (str): The text to process
+        reasoning_tag (str): The tag name to remove
+
+    Returns:
+        str: Text with reasoning content removed
+    """
+    if not reasoning_tag:
+        return res
+
+    # Try to match the complete tag pattern first
+    pattern = f"<{reasoning_tag}>.*?</{reasoning_tag}>"
+    res = re.sub(pattern, "", res, flags=re.DOTALL).strip()
+
+    # If closing tag exists but opening tag might be missing, remove everything before closing
+    # tag
+    closing_tag = f"</{reasoning_tag}>"
+    if closing_tag in res:
+        # Split on the closing tag and keep everything after it
+        parts = res.split(closing_tag, 1)
+        res = parts[1].strip() if len(parts) > 1 else res
+
+    return res
+
+
+def replace_reasoning_tags(text, tag_name):
+    """
+    Replace opening and closing reasoning tags with standard formatting.
+    Ensures exactly one blank line before START and END markers.
+
+    Args:
+        text (str): The text containing the tags
+        tag_name (str): The name of the tag to replace
+
+    Returns:
+        str: Text with reasoning tags replaced with standard format
+    """
+    if not text:
+        return text
+
+    # Replace opening tag with proper spacing
+    text = re.sub(f"\\s*<{tag_name}>\\s*", f"\n{REASONING_START}\n\n", text)
+
+    # Replace closing tag with proper spacing
+    text = re.sub(f"\\s*</{tag_name}>\\s*", f"\n\n{REASONING_END}\n\n", text)
+
+    return text
+
+
+def format_reasoning_content(reasoning_content, tag_name):
+    """
+    Format reasoning content with appropriate tags.
+
+    Args:
+        reasoning_content (str): The content to format
+        tag_name (str): The tag name to use
+
+    Returns:
+        str: Formatted reasoning content with tags
+    """
+    if not reasoning_content:
+        return ""
+
+    formatted = f"<{tag_name}>\n\n{reasoning_content}\n\n</{tag_name}>"
+    return formatted
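A quick usage sketch of the new module (the sample strings are made up; REASONING_TAG and the three helpers are exactly those defined above):

from aider.reasoning_tags import (
    REASONING_TAG,
    format_reasoning_content,
    remove_reasoning_content,
    replace_reasoning_tags,
)

text = format_reasoning_content("Let me think...", REASONING_TAG) + "\nThe answer is 42."

# Drop the reasoning entirely:
print(remove_reasoning_content(text, REASONING_TAG))  # -> "The answer is 42."

# Or render it between the THINKING/ANSWER markers:
print(replace_reasoning_tags(text, REASONING_TAG))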
@@ -17,7 +17,6 @@ except ImportError:
 import pathspec
 
 from aider import prompts, utils
-from aider.sendchat import simple_send_with_retries
 
 from .dump import dump  # noqa: F401
 
@@ -29,6 +28,7 @@ ANY_GIT_ERROR += [
     ValueError,
     AttributeError,
     AssertionError,
+    TimeoutError,
 ]
 ANY_GIT_ERROR = tuple(ANY_GIT_ERROR)
 
@@ -145,7 +145,7 @@ class GitRepo:
         else:
             cmd += ["-a"]
 
-        original_user_name = self.repo.config_reader().get_value("user", "name")
+        original_user_name = self.repo.git.config("--get", "user.name")
         original_committer_name_env = os.environ.get("GIT_COMMITTER_NAME")
         committer_name = f"{original_user_name} (aider)"
 
@@ -153,7 +153,7 @@ class GitRepo:
         os.environ["GIT_COMMITTER_NAME"] = committer_name
 
         if aider_edits and self.attribute_author:
-            original_auther_name_env = os.environ.get("GIT_AUTHOR_NAME")
+            original_author_name_env = os.environ.get("GIT_AUTHOR_NAME")
             os.environ["GIT_AUTHOR_NAME"] = committer_name
 
         try:
@@ -173,8 +173,8 @@ class GitRepo:
         del os.environ["GIT_COMMITTER_NAME"]
 
         if aider_edits and self.attribute_author:
-            if original_auther_name_env is not None:
-                os.environ["GIT_AUTHOR_NAME"] = original_auther_name_env
+            if original_author_name_env is not None:
+                os.environ["GIT_AUTHOR_NAME"] = original_author_name_env
             else:
                 del os.environ["GIT_AUTHOR_NAME"]
 
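The committer/author handling above is a save-override-restore dance on environment variables, and this hunk fixes the auther/author misspelling so the variable saved before the commit is the same one restored afterwards. The same pattern as a reusable sketch (a hypothetical helper, not aider's code):

import os
from contextlib import contextmanager

@contextmanager
def env_override(key, value):  # hypothetical helper
    original = os.environ.get(key)
    os.environ[key] = value
    try:
        yield
    finally:
        if original is not None:
            os.environ[key] = original
        else:
            del os.environ[key]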
@@ -204,7 +204,7 @@ class GitRepo:
             max_tokens = model.info.get("max_input_tokens") or 0
             if max_tokens and num_tokens > max_tokens:
                 continue
-            commit_message = simple_send_with_retries(model, messages)
+            commit_message = model.simple_send_with_retries(messages)
             if commit_message:
                 break
 
@@ -309,8 +309,11 @@ class GitRepo:
 
         # Add staged files
         index = self.repo.index
-        staged_files = [path for path, _ in index.entries.keys()]
-        files.update(self.normalize_path(path) for path in staged_files)
+        try:
+            staged_files = [path for path, _ in index.entries.keys()]
+            files.update(self.normalize_path(path) for path in staged_files)
+        except ANY_GIT_ERROR as err:
+            self.io.tool_error(f"Unable to read staged files: {err}")
 
         res = [fname for fname in files if not self.ignored_file(fname)]
 
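In GitPython, index.entries is keyed by (path, stage) tuples, which is why the comprehension above unpacks and discards the stage; the change simply wraps that read in the same ANY_GIT_ERROR guard used elsewhere in this file, since a corrupt index can raise. A minimal sketch (the repo path and the bare Exception are illustrative; aider catches its broader ANY_GIT_ERROR tuple):

import git

repo = git.Repo(".")
try:
    staged_files = [path for path, _stage in repo.index.entries.keys()]
except Exception as err:  # stand-in for aider's ANY_GIT_ERROR tuple
    staged_files = []
    print(f"Unable to read staged files: {err}")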
@@ -23,7 +23,7 @@ from aider.utils import Spinner
 
 # tree_sitter is throwing a FutureWarning
 warnings.simplefilter("ignore", category=FutureWarning)
-from tree_sitter_languages import get_language, get_parser  # noqa: E402
+from grep_ast.tsl import USING_TSL_PACK, get_language, get_parser  # noqa: E402
 
 Tag = namedtuple("Tag", "rel_fname fname line name kind".split())
 
@@ -31,8 +31,12 @@ Tag = namedtuple("Tag", "rel_fname fname line name kind".split())
 SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError, OSError)
 
 
+CACHE_VERSION = 3
+if USING_TSL_PACK:
+    CACHE_VERSION = 4
+
+
 class RepoMap:
-    CACHE_VERSION = 3
     TAGS_CACHE_DIR = f".aider.tags.cache.v{CACHE_VERSION}"
 
     warned_files = set()
@@ -282,10 +286,15 @@ class RepoMap:
         query = language.query(query_scm)
         captures = query.captures(tree.root_node)
 
-        captures = list(captures)
-
         saw = set()
-        for node, tag in captures:
+        if USING_TSL_PACK:
+            all_nodes = []
+            for tag, nodes in captures.items():
+                all_nodes += [(node, tag) for node in nodes]
+        else:
+            all_nodes = list(captures)
+
+        for node, tag in all_nodes:
             if tag.startswith("name.definition."):
                 kind = "def"
             elif tag.startswith("name.reference."):
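Query.captures() changed shape between tree-sitter releases: with the language-pack builds it returns a dict mapping capture name to a list of nodes, while older builds return a list of (node, name) pairs. The hunk above keys the branch on USING_TSL_PACK; a sketch of the same normalization keyed on the value's type instead (illustrative, not aider's exact branch):

def normalize_captures(captures):
    if isinstance(captures, dict):  # newer API: {tag: [node, ...]}
        return [(node, tag) for tag, nodes in captures.items() for node in nodes]
    return list(captures)  # older API: [(node, tag), ...]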
@@ -422,6 +431,15 @@ class RepoMap:
 
         G = nx.MultiDiGraph()
 
+        # Add a small self-edge for every definition that has no references
+        # Helps with tree-sitter 0.23.2 with ruby, where "def greet(name)"
+        # isn't counted as a def AND a ref. tree-sitter 0.24.0 does.
+        for ident in defines.keys():
+            if ident in references:
+                continue
+            for definer in defines[ident]:
+                G.add_edge(definer, definer, weight=0.1, ident=ident)
+
         for ident in idents:
             if progress:
                 progress()
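The small self-edges give otherwise-unreferenced definitions extra weight in the ranking, so their files can still surface in the repo map. A toy demonstration of the effect with networkx (file names and weights are illustrative):

import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("a.py", "b.py", weight=1.0, ident="helper")  # a.py references b.py's helper
G.add_edge("c.py", "c.py", weight=0.1, ident="orphan")  # self-edge for an unreferenced def

ranks = nx.pagerank(G, weight="weight")
print(ranks)  # c.py gets a boost above the teleport floor from its self-edge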
@@ -605,7 +623,7 @@ class RepoMap:
 
         self.tree_cache = dict()
 
-        middle = min(max_map_tokens // 25, num_tags)
+        middle = min(int(max_map_tokens // 25), num_tags)
         while lower_bound <= upper_bound:
             # dump(lower_bound, middle, upper_bound)
 
@@ -628,7 +646,7 @@ class RepoMap:
             else:
                 upper_bound = middle - 1
 
-            middle = (lower_bound + upper_bound) // 2
+            middle = int((lower_bound + upper_bound) // 2)
 
         spin.end()
         return best_tree
@@ -732,8 +750,27 @@ def get_random_color():
 
 def get_scm_fname(lang):
     # Load the tags queries
+    if USING_TSL_PACK:
+        subdir = "tree-sitter-language-pack"
+        try:
+            path = resources.files(__package__).joinpath(
+                "queries",
+                subdir,
+                f"{lang}-tags.scm",
+            )
+            if path.exists():
+                return path
+        except KeyError:
+            pass
+
+    # Fall back to tree-sitter-languages
+    subdir = "tree-sitter-languages"
     try:
-        return resources.files(__package__).joinpath("queries", f"tree-sitter-{lang}-tags.scm")
+        return resources.files(__package__).joinpath(
+            "queries",
+            subdir,
+            f"{lang}-tags.scm",
+        )
     except KeyError:
         return
 
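get_scm_fname now probes the language-pack query directory first and falls back to the legacy tree-sitter-languages layout. Because importlib.resources.files() returns a Traversable, the .exists() probe works whether the package lives on disk or inside a zip. A condensed sketch of the two-step lookup (this version also probes .exists() on the fallback, which the hunk above skips; names otherwise follow the diff):

from importlib import resources

def find_tags_query(lang, using_tsl_pack):  # condensed sketch of get_scm_fname above
    subdirs = (["tree-sitter-language-pack"] if using_tsl_pack else []) + ["tree-sitter-languages"]
    for subdir in subdirs:
        try:
            path = resources.files("aider").joinpath("queries", subdir, f"{lang}-tags.scm")
            if path.exists():
                return path
        except KeyError:
            continue
    return None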
@@ -1,2 +1,244 @@
 {
+    "deepseek-reasoner": {
+        "max_tokens": 8192,
+        "max_input_tokens": 64000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.00000055,
+        "input_cost_per_token_cache_hit": 0.00000014,
+        "cache_read_input_token_cost": 0.00000014,
+        "cache_creation_input_token_cost": 0.0,
+        "output_cost_per_token": 0.00000219,
+        "litellm_provider": "deepseek",
+        "mode": "chat",
+        //"supports_function_calling": true,
+        "supports_assistant_prefill": true,
+        //"supports_tool_choice": true,
+        "supports_prompt_caching": true
+    },
+    "openrouter/deepseek/deepseek-r1": {
+        "max_tokens": 8192,
+        "max_input_tokens": 64000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.00000055,
+        "input_cost_per_token_cache_hit": 0.00000014,
+        "cache_read_input_token_cost": 0.00000014,
+        "cache_creation_input_token_cost": 0.0,
+        "output_cost_per_token": 0.00000219,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        //"supports_function_calling": true,
+        "supports_assistant_prefill": true,
+        //"supports_tool_choice": true,
+        "supports_prompt_caching": true
+    },
+    "openrouter/deepseek/deepseek-r1:free": {
+        "max_tokens": 8192,
+        "max_input_tokens": 64000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.0,
+        "input_cost_per_token_cache_hit": 0.0,
+        "cache_read_input_token_cost": 0.00,
+        "cache_creation_input_token_cost": 0.0,
+        "output_cost_per_token": 0.0,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        //"supports_function_calling": true,
+        "supports_assistant_prefill": true,
+        //"supports_tool_choice": true,
+        "supports_prompt_caching": true
+    },
+    "openrouter/deepseek/deepseek-chat:free": {
+        "max_tokens": 8192,
+        "max_input_tokens": 64000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.0,
+        "input_cost_per_token_cache_hit": 0.0,
+        "cache_read_input_token_cost": 0.00,
+        "cache_creation_input_token_cost": 0.0,
+        "output_cost_per_token": 0.0,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        //"supports_function_calling": true,
+        "supports_assistant_prefill": true,
+        //"supports_tool_choice": true,
+        "supports_prompt_caching": true
+    },
+    "fireworks_ai/accounts/fireworks/models/deepseek-r1": {
+        "max_tokens": 160000,
+        "max_input_tokens": 128000,
+        "max_output_tokens": 20480,
+        "litellm_provider": "fireworks_ai",
+        "input_cost_per_token": 0.000008,
+        "output_cost_per_token": 0.000008,
+        "mode": "chat",
+    },
+    "fireworks_ai/accounts/fireworks/models/deepseek-v3": {
+        "max_tokens": 128000,
+        "max_input_tokens": 100000,
+        "max_output_tokens": 8192,
+        "litellm_provider": "fireworks_ai",
+        "input_cost_per_token": 0.0000009,
+        "output_cost_per_token": 0.0000009,
+        "mode": "chat",
+    },
+    "o3-mini": {
+        "max_tokens": 100000,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 100000,
+        "input_cost_per_token": 0.0000011,
+        "output_cost_per_token": 0.0000044,
+        "cache_read_input_token_cost": 0.00000055,
+        "litellm_provider": "openai",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true,
+        "supports_response_schema": true
+    },
+    "openrouter/openai/o3-mini": {
+        "max_tokens": 100000,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 100000,
+        "input_cost_per_token": 0.0000011,
+        "output_cost_per_token": 0.0000044,
+        "cache_read_input_token_cost": 0.00000055,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true,
+        "supports_response_schema": true
+    },
+    "openrouter/openai/o3-mini-high": {
+        "max_tokens": 100000,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 100000,
+        "input_cost_per_token": 0.0000011,
+        "output_cost_per_token": 0.0000044,
+        "cache_read_input_token_cost": 0.00000055,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true,
+        "supports_response_schema": true
+    },
+    "openrouter/openai/gpt-4o-mini": {
+        "max_tokens": 16384,
+        "max_input_tokens": 128000,
+        "max_output_tokens": 16384,
+        "input_cost_per_token": 0.00000015,
+        "output_cost_per_token": 0.00000060,
+        "input_cost_per_token_batches": 0.000000075,
+        "output_cost_per_token_batches": 0.00000030,
+        "cache_read_input_token_cost": 0.000000075,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_response_schema": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true
+    },
+    "claude-3-7-sonnet-20250219": {
+        "max_tokens": 8192,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.000003,
+        "output_cost_per_token": 0.000015,
+        "cache_creation_input_token_cost": 0.00000375,
+        "cache_read_input_token_cost": 0.0000003,
+        "litellm_provider": "anthropic",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_vision": true,
+        "tool_use_system_prompt_tokens": 159,
+        "supports_assistant_prefill": true,
+        "supports_pdf_input": true,
+        "supports_prompt_caching": true,
+        "supports_response_schema": true,
+        "deprecation_date": "2025-10-01",
+        "supports_tool_choice": true
+    },
+    "anthropic/claude-3-7-sonnet-20250219": {
+        "max_tokens": 8192,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.000003,
+        "output_cost_per_token": 0.000015,
+        "cache_creation_input_token_cost": 0.00000375,
+        "cache_read_input_token_cost": 0.0000003,
+        "litellm_provider": "anthropic",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_vision": true,
+        "tool_use_system_prompt_tokens": 159,
+        "supports_assistant_prefill": true,
+        "supports_pdf_input": true,
+        "supports_prompt_caching": true,
+        "supports_response_schema": true,
+        "deprecation_date": "2025-10-01",
+        "supports_tool_choice": true
+    },
+    "openrouter/anthropic/claude-3.7-sonnet": {
+        "max_tokens": 8192,
+        "max_input_tokens": 200000,
+        "max_output_tokens": 8192,
+        "input_cost_per_token": 0.000003,
+        "output_cost_per_token": 0.000015,
+        "cache_creation_input_token_cost": 0.00000375,
+        "cache_read_input_token_cost": 0.0000003,
+        "litellm_provider": "openrouter",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_vision": true,
+        "tool_use_system_prompt_tokens": 159,
+        "supports_assistant_prefill": true,
+        "supports_pdf_input": true,
+        "supports_prompt_caching": true,
+        "supports_response_schema": true,
+        "deprecation_date": "2025-10-01",
+        "supports_tool_choice": true
+    },
+    "gpt-4.5-preview": {
+        "max_tokens": 16384,
+        "max_input_tokens": 128000,
+        "max_output_tokens": 16384,
+        "input_cost_per_token": 0.000075,
+        "output_cost_per_token": 0.00015,
+        "cache_read_input_token_cost": 0.0000375,
+        "litellm_provider": "openai",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_response_schema": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true,
+        "supports_tool_choice": true
+    },
+    "openai/gpt-4.5-preview": {
+        "max_tokens": 16384,
+        "max_input_tokens": 128000,
+        "max_output_tokens": 16384,
+        "input_cost_per_token": 0.000075,
+        "output_cost_per_token": 0.00015,
+        "cache_read_input_token_cost": 0.0000375,
+        "litellm_provider": "openai",
+        "mode": "chat",
+        "supports_function_calling": true,
+        "supports_parallel_function_calling": true,
+        "supports_response_schema": true,
+        "supports_vision": true,
+        "supports_prompt_caching": true,
+        "supports_system_messages": true,
+        "supports_tool_choice": true
+    },
 }
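Note the //-comment lines and trailing commas in this block: the file is JSON5-flavored rather than strict JSON, so it must be read with a JSON5-capable parser or cleaned up first. A hedged sketch of the cleanup approach (regex-based, and it assumes // and trailing commas never occur inside string values, which holds for this file; not necessarily how aider loads it):

import json
import re

def load_model_metadata(path):  # illustrative loader
    text = open(path).read()
    text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)  # strip //-comment lines
    text = re.sub(r",(\s*[}\]])", r"\1", text)  # strip trailing commas
    return json.loads(text)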

aider/resources/model-settings.yml (new file, 885 lines)
@@ -0,0 +1,885 @@
+- name: gpt-3.5-turbo
+  weak_model_name: gpt-4o-mini
+  reminder: sys
+
+- name: gpt-3.5-turbo-0125
+  weak_model_name: gpt-4o-mini
+  reminder: sys
+
+- name: gpt-3.5-turbo-1106
+  weak_model_name: gpt-4o-mini
+  reminder: sys
+
+- name: gpt-3.5-turbo-0613
+  weak_model_name: gpt-4o-mini
+  reminder: sys
+
+- name: gpt-3.5-turbo-16k-0613
+  weak_model_name: gpt-4o-mini
+  reminder: sys
+
+- name: gpt-4-turbo-2024-04-09
+  edit_format: udiff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+
+- name: gpt-4-turbo
+  edit_format: udiff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+
+- name: openai/gpt-4o
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+  editor_edit_format: editor-diff
+
+- name: openai/gpt-4o-2024-08-06
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: gpt-4o-2024-08-06
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: gpt-4o-2024-11-20
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: openai/gpt-4o-2024-11-20
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: gpt-4o
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+  editor_edit_format: editor-diff
+
+- name: gpt-4o-mini
+  weak_model_name: gpt-4o-mini
+  lazy: true
+  reminder: sys
+
+- name: openai/gpt-4o-mini
+  weak_model_name: openai/gpt-4o-mini
+  lazy: true
+  reminder: sys
+
+- name: gpt-4-0125-preview
+  edit_format: udiff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: gpt-4-1106-preview
+  edit_format: udiff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  lazy: true
+  reminder: sys
+
+- name: gpt-4-vision-preview
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  reminder: sys
+
+- name: gpt-4-0314
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  reminder: sys
+  examples_as_sys_msg: true
+
+- name: gpt-4-0613
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  reminder: sys
+
+- name: gpt-4-32k-0613
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  reminder: sys
+
+- name: claude-3-opus-20240229
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+
+- name: openrouter/anthropic/claude-3-opus
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+
+- name: claude-3-sonnet-20240229
+  weak_model_name: claude-3-5-haiku-20241022
+
+- name: claude-3-5-sonnet-20240620
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: claude-3-5-sonnet-20240620
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-5-sonnet-20240620
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: anthropic/claude-3-5-sonnet-20240620
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-5-sonnet-20241022
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: anthropic/claude-3-5-sonnet-20241022
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-7-sonnet-20250219
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic/claude-3-7-sonnet-20250219
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-7-sonnet-latest
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: anthropic/claude-3-7-sonnet-latest
+  editor_edit_format: editor-diff
+
+- name: claude-3-7-sonnet-20250219
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: claude-3-7-sonnet-20250219
+  editor_edit_format: editor-diff
+
+- name: claude-3-7-sonnet-latest
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: claude-3-7-sonnet-latest
+  editor_edit_format: editor-diff
+
+- name: bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0
+  editor_edit_format: editor-diff
+
+- name: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
+  editor_edit_format: editor-diff
+
+- name: bedrock_converse/anthropic.claude-3-7-sonnet-20250219-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/anthropic.claude-3-7-sonnet-20250219-v1:0
+  editor_edit_format: editor-diff
+
+- name: bedrock_converse/us.anthropic.claude-3-7-sonnet-20250219-v1:0
+  edit_format: diff
+  weak_model_name: bedrock_converse/us.anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: bedrock_converse/us.anthropic.claude-3-7-sonnet-20250219-v1:0
+  editor_edit_format: editor-diff
+
+- name: vertex_ai/claude-3-7-sonnet@20250219
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai/claude-3-7-sonnet@20250219
+  editor_edit_format: editor-diff
+
+- name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 64000
+  editor_model_name: vertex_ai-anthropic_models/vertex_ai/claude-3-7-sonnet@20250219
+  editor_edit_format: editor-diff
+
+- name: openrouter/anthropic/claude-3.7-sonnet
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-3.7-sonnet
+  editor_edit_format: editor-diff
+
+- name: openrouter/anthropic/claude-3.7-sonnet:beta
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-3.7-sonnet
+  editor_edit_format: editor-diff
+
+- name: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
+  edit_format: diff
+  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-5-sonnet-latest
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: anthropic/claude-3-5-sonnet-20241022
+  editor_edit_format: editor-diff
+
+- name: claude-3-5-sonnet-20241022
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: claude-3-5-sonnet-20241022
+  editor_edit_format: editor-diff
+
+- name: anthropic/claude-3-haiku-20240307
+  weak_model_name: anthropic/claude-3-haiku-20240307
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+  cache_control: true
+
+- name: anthropic/claude-3-5-haiku-20241022
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+  cache_control: true
+
+- name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  edit_format: diff
+  weak_model_name: bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
+  use_repo_map: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+  cache_control: true
+
+- name: claude-3-5-haiku-20241022
+  edit_format: diff
+  weak_model_name: claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+  cache_control: true
+
+- name: vertex_ai/claude-3-5-haiku@20241022
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  extra_params:
+    max_tokens: 4096
+
+- name: claude-3-haiku-20240307
+  weak_model_name: claude-3-haiku-20240307
+  examples_as_sys_msg: true
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
+  cache_control: true
+
+- name: openrouter/anthropic/claude-3.5-sonnet
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-3.5-sonnet
+  editor_edit_format: editor-diff
+
+- name: openrouter/anthropic/claude-3.5-sonnet:beta
+  edit_format: diff
+  weak_model_name: openrouter/anthropic/claude-3-5-haiku:beta
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  cache_control: true
+  editor_model_name: openrouter/anthropic/claude-3.5-sonnet:beta
+  editor_edit_format: editor-diff
+
+- name: vertex_ai/claude-3-5-sonnet@20240620
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  editor_model_name: vertex_ai/claude-3-5-sonnet@20240620
+  editor_edit_format: editor-diff
+
+- name: vertex_ai/claude-3-5-sonnet-v2@20241022
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  editor_model_name: vertex_ai/claude-3-5-sonnet-v2@20241022
+  editor_edit_format: editor-diff
+
+- name: vertex_ai/claude-3-opus@20240229
+  edit_format: diff
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+  use_repo_map: true
+
+- name: vertex_ai/claude-3-sonnet@20240229
+  weak_model_name: vertex_ai/claude-3-5-haiku@20241022
+
+- name: command-r-plus
+  weak_model_name: command-r-plus
+  use_repo_map: true
+
+- name: command-r-08-2024
+  weak_model_name: command-r-08-2024
+  use_repo_map: true
+
+- name: command-r-plus-08-2024
+  weak_model_name: command-r-plus-08-2024
+  use_repo_map: true
+
+- name: groq/llama3-70b-8192
+  edit_format: diff
+  weak_model_name: groq/llama3-8b-8192
+  examples_as_sys_msg: true
+
+- name: openrouter/meta-llama/llama-3-70b-instruct
+  edit_format: diff
+  weak_model_name: openrouter/meta-llama/llama-3-70b-instruct
+  examples_as_sys_msg: true
+
+- name: gemini/gemini-1.5-pro-002
+  edit_format: diff
+  use_repo_map: true
+
+- name: gemini/gemini-1.5-flash-002
+
+- name: gemini/gemini-1.5-pro
+  edit_format: diff-fenced
+  use_repo_map: true
+
+- name: gemini/gemini-1.5-pro-latest
+  edit_format: diff-fenced
+  use_repo_map: true
+
+- name: gemini/gemini-1.5-pro-exp-0827
+  edit_format: diff-fenced
+  use_repo_map: true
+
+- name: gemini/gemini-exp-1206
+  edit_format: diff
+  use_repo_map: true
+
+- name: gemini/gemini-exp-1114
+  edit_format: diff
+  use_repo_map: true
+
+- name: gemini/gemini-exp-1121
+  edit_format: diff
+  use_repo_map: true
+
+- name: vertex_ai/gemini-pro-experimental
+  edit_format: diff-fenced
+  use_repo_map: true
+
+- name: gemini/gemini-1.5-flash-exp-0827
+
+- name: gemini/gemini-2.0-flash-exp
+  edit_format: diff
+  use_repo_map: true
+
+- name: gemini/gemini-2.0-flash
+  edit_format: diff
+  use_repo_map: true
+
+- name: openrouter/deepseek/deepseek-r1
+  edit_format: diff
+  weak_model_name: openrouter/deepseek/deepseek-chat
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+    include_reasoning: true
+  caches_by_default: true
+  editor_model_name: openrouter/deepseek/deepseek-chat
+  editor_edit_format: editor-diff
+
+- name: openrouter/deepseek/deepseek-r1:free
+  edit_format: diff
+  weak_model_name: openrouter/deepseek/deepseek-r1:free
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  caches_by_default: true
+  use_temperature: false
+  editor_model_name: openrouter/deepseek/deepseek-r1:free
+  editor_edit_format: editor-diff
+
+- name: deepseek/deepseek-reasoner
+  edit_format: diff
+  weak_model_name: deepseek/deepseek-chat
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  caches_by_default: true
+  use_temperature: false
+  editor_model_name: deepseek/deepseek-chat
+  editor_edit_format: editor-diff
+
+- name: deepseek/deepseek-chat
+  edit_format: diff
+  use_repo_map: true
+  reminder: sys
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  caches_by_default: true
+
+- name: openrouter/deepseek/deepseek-chat:free
+  edit_format: diff
+  weak_model_name: openrouter/deepseek/deepseek-chat:free
+  use_repo_map: true
+  examples_as_sys_msg: true
+  extra_params:
+    max_tokens: 8192
+  caches_by_default: true
+  use_temperature: false
+  editor_model_name: openrouter/deepseek/deepseek-chat:free
+  editor_edit_format: editor-diff
+
+- name: deepseek/deepseek-coder
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 8192
|
||||||
|
caches_by_default: true
|
||||||
|
|
||||||
|
- name: deepseek-chat
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 8192
|
||||||
|
|
||||||
|
- name: deepseek-coder
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 8192
|
||||||
|
caches_by_default: true
|
||||||
|
|
||||||
|
- name: openrouter/deepseek/deepseek-coder
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
|
||||||
|
- name: openrouter/deepseek/deepseek-chat
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
|
||||||
|
- name: openrouter/openai/gpt-4o
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
lazy: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openai/o1-mini
|
||||||
|
weak_model_name: openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: azure/o1-mini
|
||||||
|
weak_model_name: azure/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: azure/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: o1-mini
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openai/o1-preview
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: azure/o1-preview
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: azure/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: azure/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: azure/o1
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: azure/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: azure/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: o1-preview
|
||||||
|
edit_format: architect
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openrouter/openai/o1-mini
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: openrouter/openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openrouter/openai/o1-preview
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_system_prompt: false
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: openrouter/openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openrouter/openai/o1
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: openrouter/openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: openai/o1
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: o1
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
streaming: false
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: openrouter/qwen/qwen-2.5-coder-32b-instruct
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/qwen/qwen-2.5-coder-32b-instruct
|
||||||
|
use_repo_map: true
|
||||||
|
editor_model_name: openrouter/qwen/qwen-2.5-coder-32b-instruct
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openrouter/deepseek/deepseek-r1-distill-llama-70b
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/deepseek/deepseek-chat
|
||||||
|
use_repo_map: true
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 8192
|
||||||
|
caches_by_default: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: openrouter/deepseek/deepseek-chat
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: fireworks_ai/accounts/fireworks/models/deepseek-r1
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
streaming: true
|
||||||
|
editor_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
reasoning_tag: think
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 160000
|
||||||
|
|
||||||
|
- name: fireworks_ai/accounts/fireworks/models/deepseek-v3
|
||||||
|
edit_format: diff
|
||||||
|
use_repo_map: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 128000
|
||||||
|
|
||||||
|
- name: openai/o3-mini
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: o3-mini
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: openrouter/openai/o3-mini
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: openrouter/openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: openrouter/openai/o3-mini-high
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: openrouter/openai/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: openrouter/openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: azure/o3-mini
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: azure/gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
use_temperature: false
|
||||||
|
editor_model_name: azure/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
system_prompt_prefix: "Formatting re-enabled. "
|
||||||
|
|
||||||
|
- name: gpt-4.5-preview
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
lazy: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
editor_model_name: gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: openai/gpt-4.5-preview
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: gpt-4o-mini
|
||||||
|
use_repo_map: true
|
||||||
|
lazy: true
|
||||||
|
reminder: sys
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
editor_model_name: openai/gpt-4o
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
|
||||||
|
- name: fireworks_ai/accounts/fireworks/models/qwq-32b
|
||||||
|
reasoning_tag: think
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
|
||||||
|
use_repo_map: true
|
||||||
|
editor_model_name: fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
reminder: user
|
||||||
|
examples_as_sys_msg: true
|
||||||
|
use_temperature: 0.6
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 32000
|
||||||
|
top_p: 0.95
|
||||||
|
|
||||||
|
- name: groq/qwen-qwq-32b
|
||||||
|
reasoning_tag: think
|
||||||
|
edit_format: diff
|
||||||
|
weak_model_name: groq/qwen-2.5-coder-32b
|
||||||
|
use_repo_map: true
|
||||||
|
editor_model_name: groq/qwen-2.5-coder-32b
|
||||||
|
editor_edit_format: editor-diff
|
||||||
|
use_temperature: 0.6
|
||||||
|
extra_params:
|
||||||
|
max_tokens: 128000
|
||||||
|
top_p: 0.95
|
||||||
|
|
||||||
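These YAML entries follow the shape of aider's per-model settings: each `- name:` block selects an edit format, companion weak/editor models, and provider-specific `extra_params`. As a minimal sketch of how a file like this could be loaded and queried (the local path and the lookup helper are illustrative assumptions, not aider's actual loader):

    # Hypothetical loader sketch; assumes PyYAML and a local copy of the settings file.
    import yaml

    def load_model_settings(path="model-settings.yml"):
        with open(path) as f:
            return yaml.safe_load(f)  # a list of dicts, one per "- name:" entry

    def find_settings(entries, model_name):
        # Return the first entry whose name matches exactly, else None.
        return next((e for e in entries if e.get("name") == model_name), None)

    entries = load_model_settings()
    settings = find_settings(entries, "deepseek/deepseek-reasoner")
    if settings:
        print(settings["edit_format"], settings["extra_params"]["max_tokens"])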
aider/sendchat.py
@@ -1,98 +1,61 @@
-import hashlib
-import json
-import time
-
 from aider.dump import dump  # noqa: F401
-from aider.exceptions import LiteLLMExceptions
-from aider.llm import litellm
-
-
-# from diskcache import Cache
-
-
-CACHE_PATH = "~/.aider.send.cache.v1"
-CACHE = None
-# CACHE = Cache(CACHE_PATH)
-
-
-RETRY_TIMEOUT = 60
-
-
-def send_completion(
-    model_name,
-    messages,
-    functions,
-    stream,
-    temperature=0,
-    extra_params=None,
-):
-    kwargs = dict(
-        model=model_name,
-        messages=messages,
-        stream=stream,
-    )
-    if temperature is not None:
-        kwargs["temperature"] = temperature
-
-    if functions is not None:
-        function = functions[0]
-        kwargs["tools"] = [dict(type="function", function=function)]
-        kwargs["tool_choice"] = {"type": "function", "function": {"name": function["name"]}}
-
-    if extra_params is not None:
-        kwargs.update(extra_params)
-
-    key = json.dumps(kwargs, sort_keys=True).encode()
-
-    # Generate SHA1 hash of kwargs and append it to chat_completion_call_hashes
-    hash_object = hashlib.sha1(key)
-
-    if not stream and CACHE is not None and key in CACHE:
-        return hash_object, CACHE[key]
-
-    res = litellm.completion(**kwargs)
-
-    if not stream and CACHE is not None:
-        CACHE[key] = res
-
-    return hash_object, res
-
-
-def simple_send_with_retries(model, messages):
-    litellm_ex = LiteLLMExceptions()
-
-    retry_delay = 0.125
-    while True:
-        try:
-            kwargs = {
-                "model_name": model.name,
-                "messages": messages,
-                "functions": None,
-                "stream": False,
-                "temperature": None if not model.use_temperature else 0,
-                "extra_params": model.extra_params,
-            }
-
-            _hash, response = send_completion(**kwargs)
-            if not response or not hasattr(response, "choices") or not response.choices:
-                return None
-            return response.choices[0].message.content
-        except litellm_ex.exceptions_tuple() as err:
-            ex_info = litellm_ex.get_ex_info(err)
-
-            print(str(err))
-            if ex_info.description:
-                print(ex_info.description)
-
-            should_retry = ex_info.retry
-            if should_retry:
-                retry_delay *= 2
-                if retry_delay > RETRY_TIMEOUT:
-                    should_retry = False
-
-            if not should_retry:
-                return None
-
-            print(f"Retrying in {retry_delay:.1f} seconds...")
-            time.sleep(retry_delay)
-            continue
-        except AttributeError:
-            return None
+from aider.utils import format_messages
+
+
+def sanity_check_messages(messages):
+    """Check if messages alternate between user and assistant roles.
+
+    System messages can be interspersed anywhere.
+    Also verifies the last non-system message is from the user.
+    Returns True if valid, False otherwise."""
+    last_role = None
+    last_non_system_role = None
+
+    for msg in messages:
+        role = msg.get("role")
+        if role == "system":
+            continue
+
+        if last_role and role == last_role:
+            turns = format_messages(messages)
+            raise ValueError("Messages don't properly alternate user/assistant:\n\n" + turns)
+
+        last_role = role
+        last_non_system_role = role
+
+    # Ensure last non-system message is from user
+    return last_non_system_role == "user"
+
+
+def ensure_alternating_roles(messages):
+    """Ensure messages alternate between 'assistant' and 'user' roles.
+
+    Inserts empty messages of the opposite role when consecutive messages
+    of the same role are found.
+
+    Args:
+        messages: List of message dictionaries with 'role' and 'content' keys.
+
+    Returns:
+        List of messages with alternating roles.
+    """
+    if not messages:
+        return messages
+
+    fixed_messages = []
+    prev_role = None
+
+    for msg in messages:
+        current_role = msg.get("role")  # Get 'role', None if missing
+
+        # If current role same as previous, insert empty message
+        # of the opposite role
+        if current_role == prev_role:
+            if current_role == "user":
+                fixed_messages.append({"role": "assistant", "content": ""})
+            else:
+                fixed_messages.append({"role": "user", "content": ""})
+
+        fixed_messages.append(msg)
+        prev_role = current_role
+
+    return fixed_messages
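The new helpers above replace this module's old send/caching plumbing: `sanity_check_messages` raises when user/assistant turns don't alternate, and `ensure_alternating_roles` repairs such a sequence by padding with empty messages. A quick illustration of the repair behavior (the sample messages are made up):

    # Two consecutive user turns get an empty assistant turn inserted between them.
    messages = [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "First question"},
        {"role": "user", "content": "Follow-up question"},
    ]
    fixed = ensure_alternating_roles(messages)
    # fixed == [system, user, {"role": "assistant", "content": ""}, user]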
aider/special.py
@@ -41,6 +41,7 @@ ROOT_IMPORTANT_FILES = [
     "composer.lock",
     "pom.xml",
     "build.gradle",
+    "build.gradle.kts",
     "build.sbt",
     "go.mod",
     "go.sum",
aider/urls.py
@@ -14,3 +14,4 @@ install_properly = "https://aider.chat/docs/troubleshooting/imports.html"
 analytics = "https://aider.chat/docs/more/analytics.html"
 release_notes = "https://aider.chat/HISTORY.html#release-notes"
 edit_formats = "https://aider.chat/docs/more/edit-formats.html"
+models_and_keys = "https://aider.chat/docs/troubleshooting/models-and-keys.html"
aider/utils.py
@@ -112,7 +112,7 @@ def format_messages(messages, title=None):
         output.append(f"{title.upper()} {'*' * 50}")
 
     for msg in messages:
-        output.append("")
+        output.append("-------")
         role = msg["role"].upper()
         content = msg.get("content")
         if isinstance(content, list):  # Handle list content (e.g., image messages)
aider/watch.py
@@ -95,7 +95,9 @@ class FileWatcher:
         if self.verbose:
             dump(rel_path)
 
-        if self.gitignore_spec and self.gitignore_spec.match_file(str(rel_path)):
+        if self.gitignore_spec and self.gitignore_spec.match_file(
+            rel_path.as_posix() + ("/" if path_abs.is_dir() else "")
+        ):
             return False
 
         if self.verbose:
@@ -108,28 +110,52 @@ class FileWatcher:
         except Exception:
             return
 
+    def get_roots_to_watch(self):
+        """Determine which root paths to watch based on gitignore rules"""
+        if self.gitignore_spec:
+            roots = [
+                str(path)
+                for path in self.root.iterdir()
+                if not self.gitignore_spec.match_file(
+                    path.relative_to(self.root).as_posix() + ("/" if path.is_dir() else "")
+                )
+            ]
+            # Fallback to watching root if all top-level items are filtered out
+            return roots if roots else [str(self.root)]
+        return [str(self.root)]
+
+    def handle_changes(self, changes):
+        """Process the detected changes and update state"""
+        if not changes:
+            return False
+
+        changed_files = {str(Path(change[1])) for change in changes}
+        self.changed_files.update(changed_files)
+        self.io.interrupt_input()
+        return True
+
+    def watch_files(self):
+        """Watch for file changes and process them"""
+        try:
+            roots_to_watch = self.get_roots_to_watch()
+
+            for changes in watch(
+                *roots_to_watch, watch_filter=self.filter_func, stop_event=self.stop_event
+            ):
+                if self.handle_changes(changes):
+                    return
+
+        except Exception as e:
+            if self.verbose:
+                dump(f"File watcher error: {e}")
+            raise e
+
     def start(self):
         """Start watching for file changes"""
         self.stop_event = threading.Event()
         self.changed_files = set()
 
-        def watch_files():
-            try:
-                for changes in watch(
-                    str(self.root), watch_filter=self.filter_func, stop_event=self.stop_event
-                ):
-                    if not changes:
-                        continue
-                    changed_files = {str(Path(change[1])) for change in changes}
-                    self.changed_files.update(changed_files)
-                    self.io.interrupt_input()
-                    return
-            except Exception as e:
-                if self.verbose:
-                    dump(f"File watcher error: {e}")
-                raise e
-
-        self.watcher_thread = threading.Thread(target=watch_files, daemon=True)
+        self.watcher_thread = threading.Thread(target=self.watch_files, daemon=True)
         self.watcher_thread.start()
 
     def stop(self):
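The recurring idiom in both hunks is appending a trailing "/" before calling `match_file` when the path is a directory, so that directory-only gitignore patterns (like `node_modules/`) can match. A small demonstration with the `pathspec` library's gitwildmatch patterns (the pattern list is an illustrative assumption):

    # Directory-only gitignore patterns only match paths marked as directories.
    import pathspec

    spec = pathspec.PathSpec.from_lines("gitwildmatch", ["node_modules/"])
    print(spec.match_file("node_modules"))    # False: bare name, not marked as a dir
    print(spec.match_file("node_modules/"))   # True: trailing slash marks a directory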
HISTORY.md
@@ -23,6 +23,133 @@ cog.out(text)
 ]]]-->
 
+
+### main branch
+
+- Improved support for thinking/reasoning models:
+  - Added `--thinking-tokens` CLI option to control token budget for models that support thinking.
+  - Display thinking/reasoning content from LLMs which return it.
+  - Enhanced handling of reasoning tags to better clean up model responses.
+  - Added deprecation warning for `remove_reasoning` setting, now replaced by `reasoning_tag`.
+- Aider will notify you when it's completed the last request and needs your input:
+  - Added [notifications when LLM responses are ready](https://aider.chat/docs/usage/notifications.html) with `--notifications` flag.
+  - Specify desktop notification command with `--notifications-command`.
+- Added support for QWQ 32B.
+- Switch to `tree-sitter-language-pack` for tree sitter support.
+- Improved error handling for EOF (Ctrl+D) in user input prompts.
+- Added helper function to ensure hex color values have a # prefix.
+- Fixed handling of Git errors when reading staged files.
+- Improved SSL verification control for model information requests.
+- Improved empty LLM response handling with clearer warning messages.
+- Fixed Git identity retrieval to respect global configuration, by Akira Komamura.
+- Offer to install dependencies for Bedrock and Vertex AI models.
+- Deprecated model shortcut args (like --4o, --opus) in favor of the --model flag.
+- Added C# language support for tree-sitter parsing.
+- Improved handling of NO_COLOR environment variable for disabling colored output.
+- Simplified reasoning content handling in stream processing.
+- Added support for both reasoning and reasoning_content fields from different models.
+- Aider wrote 85% of the code in this release.
+
+### Aider v0.75.3
+
+- Support for V3 free on OpenRouter: `--model openrouter/deepseek/deepseek-chat:free`.
+
+### Aider v0.75.2
+
+- Added support for Claude 3.7 Sonnet models on OpenRouter, Bedrock and Vertex AI.
+- Updated default model to Claude 3.7 Sonnet on OpenRouter.
+- Added support for GPT-4.5-preview model.
+- Added support for Claude 3.7 Sonnet:beta on OpenRouter.
+- Fixed weak_model_name patterns to match main model name patterns for some models.
+
+### Aider v0.75.1
+
+- Added support for `openrouter/anthropic/claude-3.7-sonnet`
+
+### Aider v0.75.0
+
+- Basic support for Claude 3.7 Sonnet
+  - Use `--model sonnet` to use the new 3.7
+  - Thinking support coming soon.
+- Bugfix to `/editor` command.
+- Aider wrote 46% of the code in this release.
+
+### Aider v0.74.3
+
+- Downgrade streamlit dependency to avoid threading bug.
+- Added support for tree-sitter language pack.
+- Added openrouter/o3-mini-high model configuration.
+- Added build.gradle.kts to special files for Kotlin project support, by Lucas Shadler.
+
+### Aider v0.74.2
+
+- Prevent more than one cache warming thread from becoming active.
+- Fixed continuation prompt ". " for multiline input.
+- Added HCL (Terraform) syntax support, by Warren Krewenki.
+
+### Aider v0.74.1
+
+- Have o1 & o3-mini generate markdown by sending the magic "Formatting re-enabled." string.
+- Bugfix for multi-line inputs, which should not include the ". " continuation prompt.
+
+### Aider v0.74.0
+
+- Dynamically changes the Ollama context window to hold the current chat.
+- Better support for o3-mini, DeepSeek V3 & R1, o1-mini, o1 especially via third-party API providers.
+- Remove `<think>` tags from R1 responses for commit messages (and other weak model uses).
+- Can now specify `use_temperature: <float>` in model settings, not just true/false.
+- The full docker container now includes `boto3` for Bedrock.
+- Docker containers now set `HOME=/app` which is the normal project mount-point, to persist `~/.aider`.
+- Bugfix to prevent creating incorrect filenames like `python`, `php`, etc.
+- Bugfix for `--timeout`
+- Bugfix so that `/model` now correctly reports that the weak model is not changed.
+- Bugfix so that multi-line mode persists through ^C at confirmation prompts.
+- Watch files now fully ignores top-level directories named in ignore files, to reduce the chance of hitting OS watch limits. Helpful to ignore giant subtrees like `node_modules`.
+- Fast startup with more providers and when model metadata provided in local files.
+- Improved .gitignore handling:
+  - Honor ignores already in effect regardless of how they've been configured.
+  - Check for .env only when the file exists.
+- Yes/No prompts now accept All/Skip as alias for Y/N even when not processing a group of confirmations.
+- Aider wrote 77% of the code in this release.
+
+### Aider v0.73.0
+
+- Full support for o3-mini: `aider --model o3-mini`
+- New `--reasoning-effort` argument: low, medium, high.
+- Improved handling of context window size limits, with better messaging and Ollama-specific guidance.
+- Added support for removing model-specific reasoning tags from responses with `remove_reasoning: tagname` model setting.
+- Auto-create parent directories when creating new files, by xqyz.
+- Support for R1 free on OpenRouter: `--model openrouter/deepseek/deepseek-r1:free`
+- Aider wrote 69% of the code in this release.
+
+### Aider v0.72.3
+
+- Enforce user/assistant turn order to avoid R1 errors, by miradnanali.
+- Case-insensitive model name matching while preserving original case.
+
+### Aider v0.72.2
+
+- Harden against user/assistant turn order problems which cause R1 errors.
+
+### Aider v0.72.1
+
+- Fix model metadata for `openrouter/deepseek/deepseek-r1`
+
+### Aider v0.72.0
+
+- Support for DeepSeek R1.
+  - Use shortcut: `--model r1`
+  - Also via OpenRouter: `--model openrouter/deepseek/deepseek-r1`
+- Added Kotlin syntax support to repo map, by Paul Walker.
+- Added `--line-endings` for file writing, by Titusz Pan.
+- Added examples_as_sys_msg=True for GPT-4o models, improves benchmark scores.
+- Bumped all dependencies, to pick up litellm support for o1 system messages.
+- Bugfix for turn taking when reflecting lint/test errors.
+- Aider wrote 52% of the code in this release.
+
+### Aider v0.71.1
+
+- Fix permissions issue in Docker images.
+- Added read-only file announcements.
+- Bugfix: ASCII fallback for unicode errors.
+- Bugfix: integer indices for list slicing in repomap calculations.
+
 ### Aider v0.71.0
 
 - Prompts to help DeepSeek work better when alternating between `/ask` and `/code`.
@@ -36,16 +163,13 @@ cog.out(text)
 - Turn off fancy input and watch files if terminal is dumb.
 - Added support for custom voice format and input device settings.
 - Disabled Streamlit email prompt, by apaz-cli.
+- Docker container runs as non-root user.
 - Fixed lint command handling of nested spaced strings, by Aaron Weisberg.
 - Added token count feedback when adding command output to chat.
 - Improved error handling for large audio files with automatic format conversion.
 - Improved handling of git repo index errors, by Krazer.
 - Improved unicode handling in console output with ASCII fallback.
-- Added AssertionError to git error handling.
-- Fixed file export path in voice format conversion.
-- Added AttributeError to git error handling.
-- Improved markdown rendering performance with adaptive delay based on render time.
-- Fixed typo in model metadata variable name.
+- Added AssertionError, AttributeError to git error handling.
 - Aider wrote 60% of the code in this release.
 
 ### Aider v0.70.0
aider/website/_data/blame.yml
@@ -3167,8 +3167,8 @@
     malkoG: 83
   start_tag: v0.64.0
   total_lines: 670
-- aider_percentage: 81.65
-  aider_total: 574
+- aider_percentage: 86.17
+  aider_total: 841
   end_date: '2024-12-01'
   end_tag: v0.66.0
   file_counts:
@@ -3240,18 +3240,52 @@
       Paul Gauthier (aider): 103
     tests/browser/test_browser.py:
       Paul Gauthier: 1
+    tests/fixtures/languages/c/test.c:
+      Paul Gauthier (aider): 6
+    tests/fixtures/languages/cpp/test.cpp:
+      Paul Gauthier (aider): 6
+    tests/fixtures/languages/csharp/test.cs:
+      Paul Gauthier (aider): 39
+    tests/fixtures/languages/elisp/test.el:
+      Paul Gauthier (aider): 25
+    tests/fixtures/languages/elixir/test.ex:
+      Paul Gauthier (aider): 5
+    tests/fixtures/languages/elm/test.elm:
+      Paul Gauthier: 1
+      Paul Gauthier (aider): 37
+    tests/fixtures/languages/go/test.go:
+      Paul Gauthier: 1
+      Paul Gauthier (aider): 41
+    tests/fixtures/languages/java/test.java:
+      Paul Gauthier: 2
+      Paul Gauthier (aider): 14
     tests/fixtures/languages/javascript/test.js:
       Paul Gauthier: 1
       Paul Gauthier (aider): 25
+    tests/fixtures/languages/ocaml/test.ml:
+      Paul Gauthier: 2
+      Paul Gauthier (aider): 17
+    tests/fixtures/languages/php/test.php:
+      Paul Gauthier (aider): 5
     tests/fixtures/languages/python/test.py:
       Paul Gauthier: 2
       Paul Gauthier (aider): 26
+    tests/fixtures/languages/ql/test.ql:
+      Paul Gauthier (aider): 3
+    tests/fixtures/languages/ruby/test.rb:
+      Paul Gauthier (aider): 3
+    tests/fixtures/languages/rust/test.rs:
+      Paul Gauthier (aider): 33
+    tests/fixtures/languages/tsx/test.tsx:
+      Paul Gauthier (aider): 30
+    tests/fixtures/languages/typescript/test.ts:
+      Paul Gauthier (aider): 3
   grand_total:
-    Paul Gauthier: 99
-    Paul Gauthier (aider): 574
+    Paul Gauthier: 105
+    Paul Gauthier (aider): 841
     Philippe de Reynal: 30
   start_tag: v0.65.0
-  total_lines: 703
+  total_lines: 976
 - aider_percentage: 67.86
   aider_total: 437
   end_date: '2024-12-06'
@@ -3545,3 +3579,338 @@
     mdk: 34
   start_tag: v0.69.0
   total_lines: 1179
+- aider_percentage: 60.36
+  aider_total: 236
+  end_date: '2025-01-10'
+  end_tag: v0.71.0
+  file_counts:
+    aider/__init__.py:
+      Paul Gauthier: 1
+    aider/args.py:
+      Paul Gauthier: 2
+    aider/coders/base_coder.py:
+      Paul Gauthier: 7
+      Paul Gauthier (aider): 13
+    aider/commands.py:
+      Paul Gauthier: 1
+      Paul Gauthier (aider): 22
+    aider/io.py:
+      Paul Gauthier: 3
+      Paul Gauthier (aider): 16
+    aider/linter.py:
+      Aaron Weisberg: 5
+    aider/main.py:
+      Paul Gauthier: 7
+      Paul Gauthier (aider): 13
+      apaz-cli: 18
+    aider/mdstream.py:
+      Paul Gauthier: 38
+      Paul Gauthier (aider): 58
+    aider/models.py:
+      Paul Gauthier: 11
+      Paul Gauthier (aider): 2
+    aider/repo.py:
+      Krazer: 10
+      Paul Gauthier: 5
+    aider/run_cmd.py:
+      Aaron Weisberg: 2
+    aider/utils.py:
+      Paul Gauthier: 9
+    aider/voice.py:
+      Paul Gauthier: 11
+      Paul Gauthier (aider): 13
+    aider/watch.py:
+      Paul Gauthier: 1
+    benchmark/Dockerfile:
+      Josh Vera: 1
+      Paul Maunders: 12
+    benchmark/benchmark.py:
+      Nimesh Ghelani: 1
+      Paul Gauthier: 6
+      Paul Gauthier (aider): 30
+    benchmark/problem_stats.py:
+      Paul Gauthier (aider): 5
+    docker/Dockerfile:
+      Paul Gauthier (aider): 32
+    scripts/update-history.py:
+      Paul Gauthier (aider): 1
+    tests/basic/test_commands.py:
+      Paul Gauthier: 2
+    tests/basic/test_io.py:
+      Paul Gauthier (aider): 6
+    tests/basic/test_linter.py:
+      Aaron Weisberg: 2
+    tests/basic/test_models.py:
+      Paul Gauthier (aider): 25
+  grand_total:
+    Aaron Weisberg: 9
+    Josh Vera: 1
+    Krazer: 10
+    Nimesh Ghelani: 1
+    Paul Gauthier: 104
+    Paul Gauthier (aider): 236
+    Paul Maunders: 12
+    apaz-cli: 18
+  start_tag: v0.70.0
+  total_lines: 391
+- aider_percentage: 48.76
+  aider_total: 138
+  end_date: '2025-01-20'
+  end_tag: v0.72.0
+  file_counts:
+    .github/workflows/docker-build-test.yml:
+      Paul Gauthier (aider): 38
+    .github/workflows/pages.yml:
+      Paul Gauthier: 3
+      Paul Gauthier (aider): 1
+    .github/workflows/ubuntu-tests.yml:
+      Paul Gauthier (aider): 8
+    .github/workflows/windows-tests.yml:
+      Paul Gauthier (aider): 8
+    aider/__init__.py:
+      Paul Gauthier: 1
+    aider/args.py:
+      Titusz Pan: 6
+    aider/coders/base_coder.py:
+      Paul Gauthier: 11
+    aider/coders/single_wholefile_func_coder.py:
+      Paul Gauthier: 1
+    aider/coders/wholefile_func_coder.py:
+      Paul Gauthier: 1
+    aider/commands.py:
+      Paul Gauthier: 3
+    aider/history.py:
+      Paul Gauthier: 7
+    aider/io.py:
+      Paul Gauthier (aider): 14
+      Titusz Pan: 2
+    aider/main.py:
+      Titusz Pan: 1
+    aider/models.py:
+      Paul Gauthier: 16
+    aider/queries/tree-sitter-kotlin-tags.scm:
+      Paul Walker: 27
+    aider/repomap.py:
+      Paul Gauthier (aider): 2
+    aider/sendchat.py:
+      Paul Gauthier: 9
+      Paul Gauthier (aider): 22
+    aider/utils.py:
+      Paul Gauthier: 1
+    aider/website/docs/leaderboards/index.md:
+      Paul Gauthier: 2
+    benchmark/benchmark.py:
+      Paul Gauthier: 9
+    benchmark/rsync.sh:
+      Paul Gauthier: 21
+    docker/Dockerfile:
+      Paul Gauthier: 2
+      Paul Gauthier (aider): 6
+    scripts/my_models.py:
+      Paul Gauthier: 3
+    scripts/update-docs.sh:
+      Paul Gauthier: 2
+    tests/basic/test_io.py:
+      Paul Gauthier (aider): 39
+    tests/basic/test_repomap.py:
+      Paul Walker: 1
+    tests/fixtures/languages/kotlin/test.kt:
+      Paul Walker: 16
+  grand_total:
+    Paul Gauthier: 92
+    Paul Gauthier (aider): 138
+    Paul Walker: 44
+    Titusz Pan: 9
+  start_tag: v0.71.0
+  total_lines: 283
+- aider_percentage: 69.44
+  aider_total: 284
+  end_date: '2025-01-31'
+  end_tag: v0.73.0
+  file_counts:
+    aider/__init__.py:
+      Paul Gauthier: 1
+    aider/args.py:
+      Paul Gauthier: 3
+      Paul Gauthier (aider): 2
+    aider/coders/base_coder.py:
+      Paul Gauthier: 37
+      Paul Gauthier (aider): 26
+    aider/commands.py:
+      xqyz: 1
+    aider/io.py:
+      Paul Gauthier: 7
+    aider/main.py:
+      Paul Gauthier: 13
+      Paul Gauthier (aider): 15
+    aider/models.py:
+      Paul Gauthier: 8
+      Paul Gauthier (aider): 33
+    aider/sendchat.py:
+      Mir Adnan ALI: 28
+      Paul Gauthier: 11
+      Paul Gauthier (aider): 6
+    aider/urls.py:
+      Paul Gauthier: 1
+    aider/website/_includes/leaderboard.js:
+      Paul Gauthier (aider): 1
+    aider/website/docs/leaderboards/index.md:
+      Paul Gauthier: 3
+      Paul Gauthier (aider): 2
+    benchmark/benchmark.py:
+      Paul Gauthier (aider): 21
+    benchmark/rsync.sh:
+      Paul Gauthier: 2
+    tests/basic/test_coder.py:
+      Paul Gauthier: 10
+      Paul Gauthier (aider): 39
+    tests/basic/test_main.py:
+      Paul Gauthier (aider): 62
+    tests/basic/test_sendchat.py:
+      Paul Gauthier (aider): 77
+  grand_total:
+    Mir Adnan ALI: 28
+    Paul Gauthier: 96
+    Paul Gauthier (aider): 284
+    xqyz: 1
+  start_tag: v0.72.0
+  total_lines: 409
+- aider_percentage: 77.14
+  aider_total: 604
+  end_date: '2025-02-06'
+  end_tag: v0.74.0
+  file_counts:
+    aider/__init__.py:
+      Paul Gauthier: 1
+    aider/args.py:
+      Paul Gauthier: 1
+    aider/coders/base_coder.py:
+      Paul Gauthier: 24
+      Paul Gauthier (aider): 9
+    aider/coders/editblock_coder.py:
+      Paul Gauthier: 5
+    aider/coders/wholefile_coder.py:
+      Paul Gauthier: 2
+    aider/commands.py:
+      Paul Gauthier: 1
+    aider/exceptions.py:
+      Paul Gauthier: 4
+      Paul Gauthier (aider): 6
+    aider/history.py:
+      Paul Gauthier (aider): 1
+    aider/io.py:
+      Paul Gauthier: 4
+      Paul Gauthier (aider): 18
+    aider/llm.py:
+      Paul Gauthier: 3
+    aider/main.py:
+      Paul Gauthier: 21
+      Paul Gauthier (aider): 25
+    aider/models.py:
+      Paul Gauthier: 83
+      Paul Gauthier (aider): 77
+    aider/repo.py:
+      Paul Gauthier: 1
+      Paul Gauthier (aider): 2
+      "Viktor Sz\xE9pe": 3
+    aider/watch.py:
+      Paul Gauthier (aider): 45
+    benchmark/docker.sh:
+      Paul Gauthier: 2
+    docker/Dockerfile:
+      Paul Gauthier: 5
+      Paul Gauthier (aider): 4
+    tests/basic/test_editblock.py:
+      Paul Gauthier: 7
+    tests/basic/test_history.py:
+      Paul Gauthier (aider): 13
+    tests/basic/test_io.py:
+      Paul Gauthier (aider): 46
+    tests/basic/test_main.py:
+      Paul Gauthier: 8
+      Paul Gauthier (aider): 1
+    tests/basic/test_models.py:
+      Paul Gauthier (aider): 297
+    tests/basic/test_repo.py:
+      Paul Gauthier (aider): 11
+    tests/basic/test_sendchat.py:
+      Paul Gauthier (aider): 7
+    tests/basic/test_watch.py:
+      Paul Gauthier: 4
+      Paul Gauthier (aider): 42
+  grand_total:
+    Paul Gauthier: 176
+    Paul Gauthier (aider): 604
+    "Viktor Sz\xE9pe": 3
+  start_tag: v0.73.0
+  total_lines: 783
+- aider_percentage: 46.31
+  aider_total: 163
+  end_date: '2025-02-24'
+  end_tag: v0.75.0
+  file_counts:
+    aider/__init__.py:
+      Paul Gauthier: 1
+    aider/args.py:
+      Paul Gauthier: 7
+    aider/coders/base_coder.py:
+      Paul Gauthier: 12
+      Paul Gauthier (aider): 4
+    aider/commands.py:
+      FeepingCreature (aider): 6
+    aider/editor.py:
+      Paul Gauthier: 7
+      Paul Gauthier (aider): 5
+    aider/io.py:
+      Paul Gauthier: 3
+      Paul Gauthier (aider): 4
+    aider/linter.py:
+      Paul Gauthier: 1
+    aider/main.py:
+      Paul Gauthier: 16
+    aider/models.py:
+      Paul Gauthier: 4
+    aider/queries/tree-sitter-language-pack/javascript-tags.scm:
+      Paul Gauthier: 5
+    aider/queries/tree-sitter-languages/hcl-tags.scm:
+      Paul Gauthier: 3
+      Warren Krewenki: 74
+    aider/queries/tree-sitter-languages/javascript-tags.scm:
+      Paul Gauthier: 5
+    aider/repomap.py:
+      Paul Gauthier: 43
+      Paul Gauthier (aider): 11
+    aider/special.py:
+      Lucas Shadler: 1
+    aider/website/docs/leaderboards/index.md:
+      Paul Gauthier: 1
+    benchmark/Dockerfile:
+      Paul Gauthier (aider): 1
+    benchmark/benchmark.py:
+      Paul Gauthier: 4
+    benchmark/cpp-test.sh:
+      Paul Gauthier: 1
+    scripts/blame.py:
+      Paul Gauthier (aider): 2
+    scripts/issues.py:
+      Paul Gauthier (aider): 17
+    tests/basic/test_coder.py:
+      Paul Gauthier (aider): 18
+    tests/basic/test_editor.py:
+      Antti Kaihola: 1
+      Paul Gauthier (aider): 41
+    tests/basic/test_models.py:
+      Paul Gauthier (aider): 1
+    tests/basic/test_repomap.py:
+      Paul Gauthier (aider): 1
+    tests/fixtures/languages/hcl/test.tf:
+      Paul Gauthier (aider): 52
+  grand_total:
+    Antti Kaihola: 1
+    FeepingCreature (aider): 6
+    Lucas Shadler: 1
+    Paul Gauthier: 113
+    Paul Gauthier (aider): 157
+    Warren Krewenki: 74
+  start_tag: v0.74.0
+  total_lines: 352
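In each blame entry, `aider_percentage` appears to be `aider_total` over `total_lines`; for the v0.71.0 entry above, 236 / 391 ≈ 60.36%. A one-line check using those fields (entry values copied from the YAML):

    # Recompute aider_percentage from an entry's own counts.
    entry = {"aider_total": 236, "total_lines": 391}
    print(round(100 * entry["aider_total"] / entry["total_lines"], 2))  # 60.36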
130  aider/website/_data/deepseek-down.yml  Normal file
@@ -0,0 +1,130 @@
+- dirname: 2024-12-25-13-31-51--deepseekv3preview-diff2
+  test_cases: 225
+  model: DeepSeek
+  edit_format: diff
+  commit_hash: 0a23c4a-dirty
+  pass_rate_1: 22.7
+  pass_rate_2: 48.4
+  pass_num_1: 51
+  pass_num_2: 109
+  percent_cases_well_formed: 98.7
+  error_outputs: 7
+  num_malformed_responses: 7
+  num_with_malformed_responses: 3
+  user_asks: 19
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 8
+  total_tests: 225
+  command: aider --model deepseek/deepseek-chat
+  date: 2024-12-25
+  versions: 0.69.2.dev
+  seconds_per_case: 34.8
+  total_cost: 0.3369
+
+- dirname: 2025-01-28-17-47-49--v3-fireworks
+  test_cases: 225
+  model: Fireworks
+  edit_format: diff
+  commit_hash: 0336a98-dirty
+  pass_rate_1: 22.2
+  pass_rate_2: 48.4
+  pass_num_1: 50
+  pass_num_2: 109
+  percent_cases_well_formed: 96.9
+  error_outputs: 18
+  num_malformed_responses: 16
+  num_with_malformed_responses: 7
+  user_asks: 14
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 2
+  test_timeouts: 9
+  total_tests: 225
+  command: aider --model fireworks_ai/accounts/fireworks/models/deepseek-v3
+  date: 2025-01-28
+  versions: 0.72.4.dev
+  seconds_per_case: 115.9
+  total_cost: 2.1177
+
+- dirname: 2025-01-28-19-25-32--or-v3-deepinfra-diff
+  test_cases: 222
+  model: "OpenRouter: DeepInfra"
+  edit_format: diff
+  commit_hash: bfc5745, 77d2bc5-dirty
+  pass_rate_1: 23.9
+  pass_rate_2: 48.0
+  pass_num_1: 53
+  pass_num_2: 108
+  percent_cases_well_formed: 99.5
+  error_outputs: 18
+  num_malformed_responses: 1
+  num_with_malformed_responses: 1
+  user_asks: 17
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 2
+  test_timeouts: 4
+  total_tests: 225
+  command: aider --model openrouter/deepseek/deepseek-chat
+  date: 2025-01-28
+  versions: 0.72.4.dev
+  seconds_per_case: 187.0
+  total_cost: 0.2733
+
+- dirname: 2025-01-28-21-07-23--or-v3-novita-diff
+  test_cases: 225
+  model: "OpenRouter: Novita"
+  edit_format: diff
+  commit_hash: 66025a0
+  pass_rate_1: 20.4
+  pass_rate_2: 42.7
+  pass_num_1: 46
+  pass_num_2: 96
+  percent_cases_well_formed: 84.0
+  error_outputs: 265
+  num_malformed_responses: 67
+  num_with_malformed_responses: 36
+  user_asks: 5
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 8
+  total_tests: 225
+  command: aider --model openrouter/deepseek/deepseek-chat
+  date: 2025-01-28
+  versions: 0.72.4.dev
+  seconds_per_case: 472.5
+  total_cost: 0.0000
+
+- dirname: 2025-01-29-00-36-49--v3-hyperolic-diff
+  test_cases: 224
+  model: Hyperbolic
+  edit_format: diff
+  commit_hash: 298f713
+  pass_rate_1: 20.5
+  pass_rate_2: 48.4
+  pass_num_1: 46
+  pass_num_2: 109
+  percent_cases_well_formed: 97.3
+  error_outputs: 29
+  num_malformed_responses: 6
+  num_with_malformed_responses: 6
+  user_asks: 7
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 7
+  total_tests: 225
+  command: OPENAI_API_BASE=https://api.hyperbolic.xyz/v1/ aider --model openai/deepseek-ai/DeepSeek-V3
+  date: 2025-01-29
+  versions: 0.72.4.dev
+  seconds_per_case: 365.4
+  total_cost: 0.0000
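In these benchmark records, the pass rates are the pass counts over `total_tests`: for the first DeepSeek run above, 51 / 225 ≈ 22.7% and 109 / 225 ≈ 48.4%. A quick check (field names as in the YAML):

    # Derive pass_rate_1 / pass_rate_2 from the raw counts.
    run = {"pass_num_1": 51, "pass_num_2": 109, "total_tests": 225}
    print(round(100 * run["pass_num_1"] / run["total_tests"], 1))  # 22.7
    print(round(100 * run["pass_num_2"] / run["total_tests"], 1))  # 48.4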
@@ -1,3 +1,29 @@
|
|||||||
|
- dirname: 2025-02-25-20-23-07--gemini-pro
|
||||||
|
test_cases: 225
|
||||||
|
model: gemini/gemini-2.0-pro-exp-02-05
|
||||||
|
edit_format: whole
|
||||||
|
commit_hash: 2fccd47
|
||||||
|
pass_rate_1: 20.4
|
||||||
|
pass_rate_2: 35.6
|
||||||
|
pass_num_1: 46
|
||||||
|
pass_num_2: 80
|
||||||
|
percent_cases_well_formed: 100.0
|
||||||
|
error_outputs: 430
|
||||||
|
num_malformed_responses: 0
|
||||||
|
num_with_malformed_responses: 0
|
||||||
|
user_asks: 13
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
test_timeouts: 5
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gemini/gemini-2.0-pro-exp-02-05
|
||||||
|
date: 2025-02-25
|
||||||
|
versions: 0.75.2.dev
|
||||||
|
seconds_per_case: 34.8
|
||||||
|
total_cost: 0.0000
|
||||||
|
|
||||||
- dirname: 2024-12-21-18-41-18--polyglot-gpt-4o-mini
|
- dirname: 2024-12-21-18-41-18--polyglot-gpt-4o-mini
|
||||||
test_cases: 225
|
test_cases: 225
|
||||||
model: gpt-4o-mini-2024-07-18
|
model: gpt-4o-mini-2024-07-18
|
||||||
@@ -24,58 +50,84 @@
|
|||||||
seconds_per_case: 17.3
|
seconds_per_case: 17.3
|
||||||
total_cost: 0.3236
|
total_cost: 0.3236
|
||||||
|
|
||||||
- dirname: 2024-12-21-18-44-28--polyglot-sonnet
|
- dirname: 2025-01-17-19-44-33--sonnet-baseline-jan-17
|
||||||
test_cases: 225
|
test_cases: 225
|
||||||
model: claude-3-5-sonnet-20241022
|
model: claude-3-5-sonnet-20241022
|
||||||
edit_format: diff
|
edit_format: diff
|
||||||
commit_hash: a755079-dirty
|
commit_hash: 6451d59
|
||||||
pass_rate_1: 18.7
|
pass_rate_1: 22.2
|
||||||
pass_rate_2: 45.3
|
pass_rate_2: 51.6
|
||||||
pass_num_1: 42
|
pass_num_1: 50
|
||||||
pass_num_2: 102
|
pass_num_2: 116
|
||||||
percent_cases_well_formed: 100.0
|
percent_cases_well_formed: 99.6
|
||||||
error_outputs: 1
|
error_outputs: 2
|
||||||
num_malformed_responses: 0
|
num_malformed_responses: 1
|
||||||
num_with_malformed_responses: 0
|
num_with_malformed_responses: 1
|
||||||
user_asks: 14
|
user_asks: 11
|
||||||
lazy_comments: 0
|
lazy_comments: 0
|
||||||
syntax_errors: 0
|
syntax_errors: 0
|
||||||
indentation_errors: 0
|
indentation_errors: 0
|
||||||
exhausted_context_windows: 1
|
exhausted_context_windows: 1
|
||||||
test_timeouts: 12
|
test_timeouts: 8
|
||||||
total_tests: 225
|
total_tests: 225
|
||||||
command: aider --model claude-3-5-sonnet-20241022
|
command: aider --model claude-3-5-sonnet-20241022
|
||||||
date: 2024-12-21
|
date: 2025-01-17
|
||||||
versions: 0.69.2.dev
|
versions: 0.71.2.dev
|
||||||
seconds_per_case: 30.8
|
seconds_per_case: 21.4
|
||||||
total_cost: 13.4847
|
total_cost: 14.4063
|
||||||
|
|
||||||
- dirname: 2024-12-21-18-52-34--polyglot-gpt-4o-diff
|
- dirname: 2024-12-30-20-57-12--gpt-4o-2024-11-20-ex-as-sys
|
||||||
test_cases: 225
|
test_cases: 225
|
||||||
model: gpt-4o-2024-11-20
|
model: gpt-4o-2024-11-20
|
||||||
edit_format: diff
|
edit_format: diff
|
||||||
commit_hash: a755079-dirty
|
commit_hash: 09ee197-dirty
|
||||||
pass_rate_1: 4.9
|
pass_rate_1: 4.9
|
||||||
pass_rate_2: 15.1
|
pass_rate_2: 18.2
|
||||||
pass_num_1: 11
|
pass_num_1: 11
|
||||||
pass_num_2: 34
|
pass_num_2: 41
|
||||||
percent_cases_well_formed: 96.0
|
percent_cases_well_formed: 95.1
|
||||||
error_outputs: 12
|
error_outputs: 12
|
||||||
num_malformed_responses: 11
|
num_malformed_responses: 12
|
||||||
num_with_malformed_responses: 9
|
num_with_malformed_responses: 11
|
||||||
user_asks: 34
|
user_asks: 53
|
||||||
lazy_comments: 0
|
lazy_comments: 0
|
||||||
syntax_errors: 0
|
syntax_errors: 0
|
||||||
indentation_errors: 0
|
indentation_errors: 0
|
||||||
exhausted_context_windows: 1
|
exhausted_context_windows: 0
|
||||||
test_timeouts: 19
|
test_timeouts: 12
|
||||||
total_tests: 225
|
total_tests: 225
|
||||||
command: aider --model gpt-4o-2024-11-20
|
command: aider --model gpt-4o-2024-11-20
|
||||||
date: 2024-12-21
|
date: 2024-12-30
|
||||||
versions: 0.69.2.dev
|
versions: 0.70.1.dev
|
||||||
seconds_per_case: 22.2
|
seconds_per_case: 12.1
|
||||||
total_cost: 7.1835
|
total_cost: 6.7351
|
||||||
|
|
||||||
|
- dirname: 2024-12-30-20-44-54--gpt4o-ex-as-sys-clean-prompt
|
||||||
|
test_cases: 225
|
||||||
|
model: gpt-4o-2024-08-06
|
||||||
|
edit_format: diff
|
||||||
|
commit_hash: 09ee197-dirty
|
||||||
|
pass_rate_1: 4.9
|
||||||
|
pass_rate_2: 23.1
|
||||||
|
pass_num_1: 11
|
||||||
|
pass_num_2: 52
|
||||||
|
percent_cases_well_formed: 94.2
|
||||||
|
error_outputs: 21
|
||||||
|
num_malformed_responses: 21
|
||||||
|
num_with_malformed_responses: 13
|
||||||
|
user_asks: 65
|
||||||
|
lazy_comments: 0
|
||||||
|
syntax_errors: 0
|
||||||
|
indentation_errors: 0
|
||||||
|
exhausted_context_windows: 0
|
||||||
|
test_timeouts: 3
|
||||||
|
total_tests: 225
|
||||||
|
command: aider --model gpt-4o-2024-08-06
|
||||||
|
date: 2024-12-30
|
||||||
|
versions: 0.70.1.dev
|
||||||
|
seconds_per_case: 16.0
|
||||||
|
total_cost: 7.0286
|
||||||
|
|
||||||
 - dirname: 2024-12-21-19-23-03--polyglot-o1-hard-diff
   test_cases: 224
   model: o1-2024-12-17 (high)
@@ -100,7 +152,7 @@
   date: 2024-12-21
   versions: 0.69.2.dev
   seconds_per_case: 133.2
-  total_cost: 0.0000
+  total_cost: 186.4958

 - dirname: 2024-12-21-20-56-21--polyglot-deepseek-diff
   test_cases: 225
@@ -312,7 +364,7 @@

 - dirname: 2024-12-26-00-55-20--Qwen2.5-Coder-32B-Instruct
   test_cases: 225
-  model: openai/Qwen2.5-Coder-32B-Instruct
+  model: Qwen2.5-Coder-32B-Instruct
   edit_format: whole
   commit_hash: b51768b0
   pass_rate_1: 4.9
@@ -336,3 +388,343 @@
   seconds_per_case: 42.0
   total_cost: 0.0000

+- dirname: 2025-01-13-18-17-25--codestral-whole2
+  test_cases: 225
+  model: Codestral 25.01
+  edit_format: whole
+  commit_hash: 0cba898-dirty
+  pass_rate_1: 4.0
+  pass_rate_2: 11.1
+  pass_num_1: 9
+  pass_num_2: 25
+  percent_cases_well_formed: 100.0
+  error_outputs: 0
+  num_malformed_responses: 0
+  num_with_malformed_responses: 0
+  user_asks: 47
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 4
+  total_tests: 225
+  command: aider --model mistral/codestral-latest
+  date: 2025-01-13
+  versions: 0.71.2.dev
+  seconds_per_case: 9.3
+  total_cost: 1.9834
+
+- dirname: 2025-01-20-19-11-38--ds-turns-upd-cur-msgs-fix-with-summarizer
+  test_cases: 225
+  model: DeepSeek R1
+  edit_format: diff
+  commit_hash: 5650697-dirty
+  pass_rate_1: 26.7
+  pass_rate_2: 56.9
+  pass_num_1: 60
+  pass_num_2: 128
+  percent_cases_well_formed: 96.9
+  error_outputs: 8
+  num_malformed_responses: 7
+  num_with_malformed_responses: 7
+  user_asks: 15
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 1
+  test_timeouts: 5
+  total_tests: 225
+  command: aider --model deepseek/deepseek-reasoner
+  date: 2025-01-20
+  versions: 0.71.2.dev
+  seconds_per_case: 113.7
+  total_cost: 5.4193
+
+- dirname: 2025-01-23-19-14-48--r1-architect-sonnet
+  test_cases: 225
+  model: DeepSeek R1 + claude-3-5-sonnet-20241022
+  edit_format: architect
+  commit_hash: 05a77c7
+  editor_model: claude-3-5-sonnet-20241022
+  editor_edit_format: editor-diff
+  pass_rate_1: 27.1
+  pass_rate_2: 64.0
+  pass_num_1: 61
+  pass_num_2: 144
+  percent_cases_well_formed: 100.0
+  error_outputs: 2
+  num_malformed_responses: 0
+  num_with_malformed_responses: 0
+  user_asks: 392
+  lazy_comments: 6
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 5
+  total_tests: 225
+  command: aider --architect --model r1 --editor-model sonnet
+  date: 2025-01-23
+  versions: 0.72.3.dev
+  seconds_per_case: 251.6
+  total_cost: 13.2933
+
+- dirname: 2025-01-28-16-00-03--qwen-max-2025-01-25-polyglot-diff
+  test_cases: 225
+  model: qwen-max-2025-01-25
+  edit_format: diff
+  commit_hash: ae7d459
+  pass_rate_1: 9.3
+  pass_rate_2: 21.8
+  pass_num_1: 21
+  pass_num_2: 49
+  percent_cases_well_formed: 90.2
+  error_outputs: 46
+  num_malformed_responses: 44
+  num_with_malformed_responses: 22
+  user_asks: 23
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 9
+  total_tests: 225
+  command: OPENAI_API_BASE=https://dashscope-intl.aliyuncs.com/compatible-mode/v1 aider --model openai/qwen-max-2025-01-25
+  date: 2025-01-28
+  versions: 0.72.4.dev
+  seconds_per_case: 39.5
+
+- dirname: 2025-01-31-20-27-46--o3-mini-diff2
+  test_cases: 225
+  model: o3-mini (medium)
+  edit_format: diff
+  commit_hash: 2fb517b-dirty
+  pass_rate_1: 19.1
+  pass_rate_2: 53.8
+  pass_num_1: 43
+  pass_num_2: 121
+  percent_cases_well_formed: 95.1
+  error_outputs: 28
+  num_malformed_responses: 28
+  num_with_malformed_responses: 11
+  user_asks: 17
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 2
+  total_tests: 225
+  command: aider --model o3-mini
+  date: 2025-01-31
+  versions: 0.72.4.dev
+  seconds_per_case: 47.2
+  total_cost: 8.8599
+
+- dirname: 2025-01-31-20-42-47--o3-mini-diff-high
+  test_cases: 224
+  model: o3-mini (high)
+  edit_format: diff
+  commit_hash: b0d58d1-dirty
+  pass_rate_1: 21.0
+  pass_rate_2: 60.4
+  pass_num_1: 47
+  pass_num_2: 136
+  percent_cases_well_formed: 93.3
+  error_outputs: 26
+  num_malformed_responses: 24
+  num_with_malformed_responses: 15
+  user_asks: 19
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 1
+  test_timeouts: 7
+  total_tests: 225
+  command: aider --model o3-mini --reasoning-effort high
+  date: 2025-01-31
+  versions: 0.72.4.dev
+  seconds_per_case: 124.6
+  total_cost: 18.1584
+
+- dirname: 2025-01-21-22-51-49--gemini-2.0-flash-thinking-exp-01-21-polyglot-diff
+  test_cases: 225
+  model: gemini-2.0-flash-thinking-exp-01-21
+  edit_format: diff
+  commit_hash: 843720a
+  pass_rate_1: 5.8
+  pass_rate_2: 18.2
+  pass_num_1: 13
+  pass_num_2: 41
+  percent_cases_well_formed: 77.8
+  error_outputs: 182
+  num_malformed_responses: 180
+  num_with_malformed_responses: 50
+  user_asks: 26
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 2
+  test_timeouts: 7
+  total_tests: 225
+  command: aider --model gemini/gemini-2.0-flash-thinking-exp-01-21
+  date: 2025-01-21
+  versions: 0.72.2.dev
+  seconds_per_case: 24.2
+  total_cost: 0.0000
+
+- dirname: 2025-02-15-19-51-22--chatgpt4o-feb15-diff
+  test_cases: 223
+  model: chatgpt-4o-latest (2025-02-15)
+  edit_format: diff
+  commit_hash: 108ce18-dirty
+  pass_rate_1: 9.0
+  pass_rate_2: 27.1
+  pass_num_1: 20
+  pass_num_2: 61
+  percent_cases_well_formed: 93.3
+  error_outputs: 66
+  num_malformed_responses: 21
+  num_with_malformed_responses: 15
+  user_asks: 57
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 2
+  total_tests: 225
+  command: aider --model chatgpt-4o-latest
+  date: 2025-02-15
+  versions: 0.74.3.dev
+  seconds_per_case: 12.4
+  total_cost: 14.3703
+
+- dirname: 2025-02-24-19-54-07--sonnet37-diff
+  test_cases: 225
+  model: claude-3-7-sonnet-20250219 (no thinking)
+  edit_format: diff
+  commit_hash: 75e9ee6
+  pass_rate_1: 24.4
+  pass_rate_2: 60.4
+  pass_num_1: 55
+  pass_num_2: 136
+  percent_cases_well_formed: 93.3
+  error_outputs: 16
+  num_malformed_responses: 16
+  num_with_malformed_responses: 15
+  user_asks: 12
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 0
+  total_tests: 225
+  command: aider --model sonnet
+  date: 2025-02-24
+  versions: 0.74.4.dev
+  seconds_per_case: 28.3
+  total_cost: 17.7191
+
+- dirname: 2025-02-24-21-47-23--sonnet37-diff-think-32k-64k
+  test_cases: 225
+  model: claude-3-7-sonnet-20250219 (32k thinking tokens)
+  edit_format: diff
+  commit_hash: 60d11a6, 93edbda
+  pass_rate_1: 29.3
+  pass_rate_2: 64.9
+  pass_num_1: 66
+  pass_num_2: 146
+  percent_cases_well_formed: 97.8
+  error_outputs: 66
+  num_malformed_responses: 5
+  num_with_malformed_responses: 5
+  user_asks: 5
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 0
+  test_timeouts: 1
+  total_tests: 225
+  command: "aider --model anthropic/claude-3-7-sonnet-20250219 # plus yml config"
+  date: 2025-02-24
+  versions: 0.75.1.dev
+  seconds_per_case: 105.2
+  total_cost: 36.8343
+
+- dirname: 2025-02-27-20-26-15--gpt45-diff3
+  test_cases: 224
+  model: gpt-4.5-preview
+  edit_format: diff
+  commit_hash: b462e55-dirty
+  pass_rate_1: 22.3
+  pass_rate_2: 44.9
+  pass_num_1: 50
+  pass_num_2: 101
+  percent_cases_well_formed: 97.3
+  error_outputs: 10
+  num_malformed_responses: 8
+  num_with_malformed_responses: 6
+  user_asks: 15
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 1
+  test_timeouts: 2
+  total_tests: 225
+  command: aider --model openai/gpt-4.5-preview
+  date: 2025-02-27
+  versions: 0.75.2.dev
+  seconds_per_case: 113.5
+  total_cost: 183.1802
+
+- dirname: 2025-03-06-17-40-24--qwq32b-diff-temp-topp-ex-sys-remind-user-for-real
+  test_cases: 225
+  model: QwQ-32B
+  edit_format: diff
+  commit_hash: 51d118f-dirty
+  pass_rate_1: 8.0
+  pass_rate_2: 20.9
+  pass_num_1: 18
+  pass_num_2: 47
+  percent_cases_well_formed: 67.6
+  error_outputs: 145
+  num_malformed_responses: 143
+  num_with_malformed_responses: 73
+  user_asks: 17
+  lazy_comments: 0
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 1
+  test_timeouts: 4
+  total_tests: 225
+  command: aider --model fireworks_ai/accounts/fireworks/models/qwq-32b
+  date: 2025-03-06
+  versions: 0.75.3.dev
+  seconds_per_case: 228.6
+  total_cost: 0.0000
+
+- dirname: 2025-03-07-15-11-27--qwq32b-arch-temp-topp-again
+  test_cases: 225
+  model: QwQ-32B + Qwen 2.5 Coder Instruct
+  edit_format: architect
+  commit_hash: 52162a5
+  editor_model: fireworks_ai/accounts/fireworks/models/qwen2p5-coder-32b-instruct
+  editor_edit_format: editor-diff
+  pass_rate_1: 9.8
+  pass_rate_2: 26.2
+  pass_num_1: 22
+  pass_num_2: 59
+  percent_cases_well_formed: 100.0
+  error_outputs: 122
+  num_malformed_responses: 0
+  num_with_malformed_responses: 0
+  user_asks: 489
+  lazy_comments: 8
+  syntax_errors: 0
+  indentation_errors: 0
+  exhausted_context_windows: 1
+  test_timeouts: 2
+  total_tests: 225
+  command: aider --model fireworks_ai/accounts/fireworks/models/qwq-32b --architect
+  date: 2025-03-07
+  versions: 0.75.3.dev
+  seconds_per_case: 137.4
+  total_cost: 0
aider/website/_data/r1_architect.yml (new file, 138 lines)
@@ -0,0 +1,138 @@
- dirname: 2025-01-23-19-14-48--r1-architect-sonnet
  test_cases: 225
  model: R1+Sonnet
  edit_format: architect
  commit_hash: 05a77c7
  editor_model: claude-3-5-sonnet-20241022
  editor_edit_format: editor-diff
  pass_rate_1: 27.1
  pass_rate_2: 64.0
  pass_num_1: 61
  pass_num_2: 144
  percent_cases_well_formed: 100.0
  error_outputs: 2
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 392
  lazy_comments: 6
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 5
  total_tests: 225
  command: aider --architect --model r1 --editor-model sonnet
  date: 2025-01-23
  versions: 0.72.3.dev
  seconds_per_case: 251.6
  total_cost: 13.2933

- dirname: 2025-01-20-19-11-38--ds-turns-upd-cur-msgs-fix-with-summarizer
  test_cases: 225
  model: R1
  edit_format: diff
  commit_hash: 5650697-dirty
  pass_rate_1: 26.7
  pass_rate_2: 56.9
  pass_num_1: 60
  pass_num_2: 128
  percent_cases_well_formed: 96.9
  error_outputs: 8
  num_malformed_responses: 7
  num_with_malformed_responses: 7
  user_asks: 15
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 1
  test_timeouts: 5
  total_tests: 225
  command: aider --model r1
  date: 2025-01-20
  versions: 0.71.2.dev
  seconds_per_case: 113.7
  total_cost: 5.4193

- dirname: 2024-12-21-19-23-03--polyglot-o1-hard-diff
  test_cases: 224
  model: o1
  edit_format: diff
  commit_hash: a755079-dirty
  pass_rate_1: 23.7
  pass_rate_2: 61.7
  pass_num_1: 53
  pass_num_2: 139
  percent_cases_well_formed: 91.5
  error_outputs: 25
  num_malformed_responses: 24
  num_with_malformed_responses: 19
  user_asks: 16
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  total_tests: 225
  command: aider --model o1
  date: 2024-12-21
  versions: 0.69.2.dev
  seconds_per_case: 133.2
  total_cost: 186.4958

- dirname: 2024-12-25-13-31-51--deepseekv3preview-diff2
  test_cases: 225
  model: DeepSeek V3
  edit_format: diff
  commit_hash: 0a23c4a-dirty
  pass_rate_1: 22.7
  pass_rate_2: 48.4
  pass_num_1: 51
  pass_num_2: 109
  percent_cases_well_formed: 98.7
  error_outputs: 7
  num_malformed_responses: 7
  num_with_malformed_responses: 3
  user_asks: 19
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 8
  total_tests: 225
  command: aider --model deepseek
  date: 2024-12-25
  versions: 0.69.2.dev
  seconds_per_case: 34.8
  total_cost: 0.3369

- dirname: 2025-01-17-19-44-33--sonnet-baseline-jan-17
  test_cases: 225
  model: Sonnet
  edit_format: diff
  commit_hash: 6451d59
  pass_rate_1: 22.2
  pass_rate_2: 51.6
  pass_num_1: 50
  pass_num_2: 116
  percent_cases_well_formed: 99.6
  error_outputs: 2
  num_malformed_responses: 1
  num_with_malformed_responses: 1
  user_asks: 11
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 1
  test_timeouts: 8
  total_tests: 225
  command: aider --model sonnet
  date: 2025-01-17
  versions: 0.71.2.dev
  seconds_per_case: 21.4
  total_cost: 14.4063
@@ -8,9 +8,18 @@ aider-install
 # Change directory into your code base
 cd /to/your/project
 
-# Work with Claude 3.5 Sonnet on your code
-aider --model sonnet --anthropic-api-key your-key-goes-here
+# Work with DeepSeek via DeepSeek's API
+aider --model deepseek --api-key deepseek=your-key-goes-here
 
-# Work with GPT-4o on your code
-aider --model gpt-4o --openai-api-key your-key-goes-here
+# Work with Claude 3.7 Sonnet via Anthropic's API
+aider --model sonnet --api-key anthropic=your-key-goes-here
+
+# Work with GPT-4o via OpenAI's API
+aider --model gpt-4o --api-key openai=your-key-goes-here
+
+# Work with Sonnet via OpenRouter's API
+aider --model openrouter/anthropic/claude-3.7-sonnet --api-key openrouter=your-key-goes-here
+
+# Work with DeepSeek via OpenRouter's API
+aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=your-key-goes-here
 ```
@@ -23,6 +23,16 @@ document.addEventListener('DOMContentLoaded', function () {
       return (label && label.includes(HIGHLIGHT_MODEL)) ? 'rgba(255, 99, 132, 1)' : 'rgba(54, 162, 235, 1)';
     },
     borderWidth: 1
+  }, {
+    label: 'Total Cost ($)',
+    data: [],
+    type: 'scatter',
+    yAxisID: 'y1',
+    backgroundColor: 'rgba(153, 102, 255, 1)',
+    borderColor: '#fff',
+    borderWidth: 1,
+    pointRadius: 5,
+    pointHoverRadius: 7
   }]
 };
 
@@ -32,7 +42,8 @@ document.addEventListener('DOMContentLoaded', function () {
     model: '{{ row.model }}',
     pass_rate: {{ row[pass_rate_field] }},
     percent_cases_well_formed: {{ row.percent_cases_well_formed }},
-    edit_format: '{{ row.edit_format | default: "diff" }}'
+    edit_format: '{{ row.edit_format | default: "diff" }}',
+    total_cost: {{ row.total_cost | default: 0 }}
   });
 {% endfor %}
 
@@ -43,6 +54,7 @@ document.addEventListener('DOMContentLoaded', function () {
   displayedData = [];
   leaderboardData.labels = [];
   leaderboardData.datasets[0].data = [];
+  leaderboardData.datasets[1].data = [];
 
   allData.forEach(function(row, index) {
     var rowElement = document.getElementById('edit-row-' + index);
@@ -53,6 +65,8 @@ document.addEventListener('DOMContentLoaded', function () {
       displayedData.push(row);
       leaderboardData.labels.push(row.model);
       leaderboardData.datasets[0].data.push(row.pass_rate);
+      // Only include cost if it's not zero (placeholder for unknown)
+      leaderboardData.datasets[1].data.push(row.total_cost > 0 ? row.total_cost : null);
     }
   });
 
@@ -96,7 +110,7 @@ document.addEventListener('DOMContentLoaded', function () {
   options: {
     plugins: {
       legend: {
-        display: true,
+        display: {% if show_legend == false %}false{% else %}true{% endif %},
         labels: {
           generateLabels: function(chart) {
             return [
@@ -111,10 +125,29 @@ document.addEventListener('DOMContentLoaded', function () {
                 fillStyle: blueDiagonalPattern,
                 strokeStyle: 'rgba(54, 162, 235, 1)',
                 lineWidth: 1
+              },
+              {
+                text: 'Total Cost ($)',
+                fillStyle: 'rgba(153, 102, 255, 1)',
+                strokeStyle: '#fff',
+                lineWidth: 1,
+                pointStyle: 'circle'
               }
             ];
           }
         }
+      },
+      tooltip: {
+        callbacks: {
+          label: function(context) {
+            const datasetLabel = context.dataset.label || '';
+            const value = context.parsed.y;
+            if (datasetLabel === 'Total Cost ($)') {
+              return datasetLabel + ': $' + value.toFixed(2);
+            }
+            return datasetLabel + ': ' + value.toFixed(1) + '%';
+          }
+        }
+      }
       }
     },
     scales: {
@@ -125,6 +158,17 @@ document.addEventListener('DOMContentLoaded', function () {
           text: 'Percent completed correctly'
         }
       },
+      y1: {
+        beginAtZero: true,
+        position: 'right',
+        grid: {
+          drawOnChartArea: false
+        },
+        title: {
+          display: true,
+          text: 'Total Cost ($)'
+        }
+      },
       x: {
         ticks: {
           callback: function(value, index) {
@@ -173,6 +217,7 @@ document.addEventListener('DOMContentLoaded', function () {
   displayedData = [];
   leaderboardData.labels = [];
   leaderboardData.datasets[0].data = [];
+  leaderboardData.datasets[1].data = [];
 
   for (var i = 0; i < rows.length; i++) {
     var rowText = rows[i].textContent;
@@ -181,6 +226,8 @@ document.addEventListener('DOMContentLoaded', function () {
       displayedData.push(allData[i]);
       leaderboardData.labels.push(allData[i].model);
       leaderboardData.datasets[0].data.push(allData[i].pass_rate);
+      // Only include cost if it's not zero (placeholder for unknown)
+      leaderboardData.datasets[1].data.push(allData[i].total_cost > 0 ? allData[i].total_cost : null);
     } else {
       rows[i].style.display = 'none';
     }
@@ -1 +1 @@
-Aider works best with Claude 3.5 Sonnet, DeepSeek V3, o1 & GPT-4o and can [connect to almost any LLM](https://aider.chat/docs/llms.html).
+Aider works best with Claude 3.5 Sonnet, DeepSeek R1 & Chat V3, OpenAI o1, o3-mini & GPT-4o. Aider can [connect to almost any LLM, including local models](https://aider.chat/docs/llms.html).
@@ -39,9 +39,7 @@ Aider will directly edit the code in your local source files,
 and [git commit the changes](https://aider.chat/docs/git.html)
 with sensible commit messages.
 You can start a new project or work with an existing git repo.
-Aider works well with GPT 3.5, GPT-4, GPT-4 Turbo with Vision,
-and Claude 3 Opus.
-It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.html).
+{% include works-best.md %}
 
 Use the `--browser` switch to launch the browser version of aider:
 
aider/website/_posts/2025-01-15-uv.md (new file, 102 lines)
@@ -0,0 +1,102 @@
---
title: Using uv as an installer
excerpt: Reliably packaging & distributing python CLI tools is hard. Aider uses uv in novel ways to make it easy to install the aider CLI, its dependencies and python 3.12. All in an isolated env.
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}

# Using uv as an installer
{: .no_toc }

It's hard to reliably
package and distribute python command line tools
to end users.
Users frequently encounter challenges:
dependency version conflicts, virtual environment management,
needing to install python or a specific version of python, etc.

Aider employs [uv](https://github.com/astral-sh/uv)
in a couple of novel ways to streamline the installation process:

1. Install aider with
`curl https://aider.chat/install.sh | sh` even if python isn't already installed.

2. Users who have python 3.8+ installed can `pip install aider-install && aider-install`.

Both methods use uv to **globally** install the `aider` command line program,
with all of its dependencies in an **isolated environment**.
They ensure that aider will run with **python 3.12**, and install that version
if it is not already available.

These uv install methods are especially helpful for aider, because it
has a large set of very specific dependencies.
Since not all of aider's dependencies are available on all python versions,
it requires python 3.9-3.12.

Most users don't want to worry about these details --
they just want a quick way to install and run aider.


## One-liners

Users can install aider with a shell one-liner, without even having python previously installed:

```bash
curl -LsSf https://aider.chat/install.sh | sh
```

This installs uv, then uses it to install python 3.12,
install the `aider` command line tool
and update the user's shell path.
Under the hood, it is simply a copy of
uv's own install script `https://astral.sh/uv/install.sh`
with [one line added](https://github.com/Aider-AI/aider/blob/4251e976b3aa52c2a3af08da4b203d4d524c8e92/aider/website/install.sh#L1181), to install aider as a tool:

```
ensure "${_install_dir}/uv" tool install --force --python python3.12 aider-chat@latest
```


## aider-install

The aider-install python package allows quick global installation of aider
for users who already have python 3.8+ installed.
It simply provides the `aider-install` command line program,
which users just need to run once.

```bash
pip install aider-install
aider-install
```

The `pip install aider-install` installs only two packages:
aider-install and the [uv python package](https://pypi.org/project/uv/).
This ensures that uv is available
in the user's environment.
Everything else is installed in a stand-alone environment created by uv.

When the user runs `aider-install`, it runs uv
to install aider as a tool and update the user's shell path if needed:

```bash
uv tool install --force --python python3.12 aider-chat
uv tool update-shell
```


## Benefits

These uv install methods have been popular with users,
providing a hassle free way to install aider and quickly get started.
Installs are also extremely fast, much faster than pip or pipx installs
even when uv is also installing python 3.12!

There are also a number of benefits from the perspective of the tool developer/publisher.
Since providing these install methods, far fewer users report dependency problems and
version conflicts as compared to users who `pip install aider-chat`.
There is also less pressure to rapidly support the newest python versions,
since aider always installs with python 3.12.
aider/website/_posts/2025-01-24-r1-sonnet.md (new file, 118 lines)
@@ -0,0 +1,118 @@
---
title: R1+Sonnet set SOTA on aider's polyglot benchmark
excerpt: R1+Sonnet has set a new SOTA on the aider polyglot benchmark. At 14X less cost compared to o1.
highlight_image: /assets/r1-sonnet-sota.jpg
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}

# R1+Sonnet set SOTA on aider's polyglot benchmark
{: .no_toc }

<canvas id="editChart" width="800" height="450" style="margin-top: 20px"></canvas>

Aider supports [using a pair of models for coding](https://aider.chat/2024/09/26/architect.html):

- An Architect model is asked to describe how to solve the coding problem. Thinking/reasoning models often work well in this role.
- An Editor model is given the Architect's solution and asked to produce specific code editing instructions to apply those changes to existing source files.

**R1 as architect with Sonnet as editor has set a new SOTA of 64.0%** on the
[aider polyglot benchmark](/2024/12/21/polyglot.html).
They achieve this at **14X less cost** compared to the previous o1 SOTA result.

o1 paired with Sonnet didn't produce better results than just using o1 alone.
Using various other models as editor didn't seem to improve o1 or R1 versus their solo scores.
This is in contrast to the first wave of thinking models like o1-preview and o1-mini,
which improved when paired with many different editor models.

o1 was set with reasoning effort high for these tests.

## Try it

Once you [install aider](https://aider.chat/docs/install.html),
you can use aider, R1 and Sonnet like this:

```bash
export DEEPSEEK_API_KEY=<your-key>
export ANTHROPIC_API_KEY=<your-key>

aider --architect --model r1 --editor-model sonnet
```

Or if you have an [OpenRouter](https://openrouter.ai) account:

```bash
export OPENROUTER_API_KEY=<your-key>

aider --architect --model openrouter/deepseek/deepseek-r1 --editor-model openrouter/anthropic/claude-3.5-sonnet
```

## Thinking output

There has been
[some recent discussion](https://github.com/Aider-AI/aider/pull/2973)
about extracting the `<think>` tokens from R1's responses
and feeding them to Sonnet.
That was an interesting experiment, for sure.

To be clear, the results above are *not* using R1's thinking tokens, just the normal
final output.
R1 is configured in aider's standard architect role with Sonnet as editor.
The benchmark results that used the thinking tokens appear to be worse than
the architect/editor results shared here.

## Results

<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
  <thead style="background-color: #f2f2f2;">
    <tr>
      <th style="padding: 8px; text-align: left;">Model</th>
      <th style="padding: 8px; text-align: center;">Percent completed correctly</th>
      <th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
      <th style="padding: 8px; text-align: left;">Command</th>
      <th style="padding: 8px; text-align: center;">Edit format</th>
      <th style="padding: 8px; text-align: center;">Total Cost</th>
    </tr>
  </thead>
  <tbody>
    {% assign edit_sorted = site.data.r1_architect | sort: 'pass_rate_2' | reverse %}
    {% for row in edit_sorted %}
    <tr style="border-bottom: 1px solid #ddd;">
      <td style="padding: 8px;">{{ row.model }}</td>
      <td style="padding: 8px; text-align: center;">{{ row.pass_rate_2 }}%</td>
      <td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
      <td style="padding: 8px;"><code>{{ row.command }}</code></td>
      <td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
      <td style="padding: 8px; text-align: center;">{% if row.total_cost == 0 %}?{% else %}${{ row.total_cost | times: 1.0 | round: 2 }}{% endif %}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>

<script src="https://unpkg.com/patternomaly/dist/patternomaly.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
  {% assign data_source = edit_sorted %}
  {% assign pass_rate_field = "pass_rate_2" %}
  {% assign highlight_model = "+" %}
  {% assign show_legend = false %}
  {% include leaderboard.js %}
</script>
<style>
  tr.selected {
    color: #0056b3;
  }
  table {
    table-layout: fixed;
  }
  td, th {
    word-wrap: break-word;
    overflow-wrap: break-word;
  }
  td:nth-child(3), td:nth-child(4) {
    font-size: 12px;
  }
</style>
aider/website/_posts/2025-01-28-deepseek-down.md (new file, 257 lines)
@@ -0,0 +1,257 @@
---
title: Alternative DeepSeek V3 providers
excerpt: DeepSeek's API has been experiencing reliability issues. Here are alternative providers you can use.
#highlight_image: /assets/deepseek-down.jpg
draft: false
nav_exclude: true
---
{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}

# Alternative DeepSeek V3 providers
{: .no_toc }

<canvas id="editChart" width="800" height="450" style="margin-top: 20px"></canvas>

DeepSeek's API has been experiencing significant reliability issues for the past 24-48+ hours, with many users reporting downtime and overload problems.
Their [status page](https://status.deepseek.com) notes an ongoing incident.

If you're affected by these issues, several alternative providers offer access to DeepSeek V3. This article compares their performance on aider's polyglot benchmark to help you choose a reliable alternative.

## Providers
{: .no_toc }

* TOC
{:toc}

## OpenRouter

[OpenRouter offers many DeepSeek providers](https://openrouter.ai/deepseek/deepseek-chat/providers)
through their unified API.
You can use aider with OpenRouter like this:

```bash
# Set your API key using environment variables
export OPENROUTER_API_KEY=<your-key>
aider --model openrouter/deepseek/deepseek-chat

# Or use the --api-key command line option
aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=<your-key>

# Or add it to .aider.conf.yml in your home directory or project root:
api-key:
  - openrouter=<your-key>
```

OpenRouter automatically monitors their providers and routes requests to stable
APIs and away from those experiencing unreliable performance.

But not all providers serve the same version of open source models, and not
all have the same privacy guarantees.
You can control which OpenRouter providers are used to serve the model via
[aider's model settings](https://aider.chat/docs/config/adv-model-settings.html#model-settings).
Create a `.aider.model.settings.yml` file in your home directory or git project root with settings like this:

```yaml
- name: openrouter/deepseek/deepseek-chat
  extra_params:
    extra_body:
      provider:
        # Only use these providers, in this order
        order: ["Novita"]
        # Don't fall back to other providers
        allow_fallbacks: false
```

See [OpenRouter's provider routing docs](https://openrouter.ai/docs/provider-routing) for more details.


## Fireworks

```bash
# Set your API key using environment variables
export FIREWORKS_API_KEY=<your-key>
aider --model fireworks_ai/accounts/fireworks/models/deepseek-chat

# Or use the --api-key command line option
aider --model fireworks_ai/accounts/fireworks/models/deepseek-chat --api-key fireworks=<your-key>

# Or add it to .aider.conf.yml in your home directory or project root:
api-key:
  - fireworks=<your-key>
```

Create a `.aider.model.settings.yml` file in your home directory or git project root with settings like this:

```yaml
- name: fireworks_ai/accounts/fireworks/models/deepseek-chat
  edit_format: diff
  weak_model_name: null
  use_repo_map: true
  send_undo_reply: false
  lazy: false
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  cache_control: false
  caches_by_default: true
  use_system_prompt: true
  use_temperature: true
  streaming: true
```


## Hyperbolic

You can use [Hyperbolic's API](https://hyperbolic.xyz) as an OpenAI-compatible provider:

```bash
# Set your API key using environment variables
export OPENAI_API_BASE=https://api.hyperbolic.xyz/v1/
export OPENAI_API_KEY=<your-key>
aider --model openai/deepseek-ai/DeepSeek-V3

# Or use the --api-key command line option
aider --model openai/deepseek-ai/DeepSeek-V3 --api-key openai=<your-key>

# Or add it to .aider.conf.yml in your home directory or project root:
api-key:
  - openai=<your-key>
```

Create a `.aider.model.settings.yml` file in your home directory or git project root with settings like this:

```yaml
- name: openai/deepseek-ai/DeepSeek-V3
  edit_format: diff
  weak_model_name: null
  use_repo_map: true
  send_undo_reply: false
  lazy: false
  reminder: sys
  examples_as_sys_msg: true
  cache_control: false
  caches_by_default: true
  use_system_prompt: true
  use_temperature: true
  streaming: true
  editor_model_name: null
  editor_edit_format: null
  extra_params:
    max_tokens: 65536
```

## Ollama

You can run [DeepSeek V3 via Ollama](https://ollama.com/library/deepseek-v3).

```bash
# Pull the model
ollama pull deepseek-v3

# Start your ollama server
ollama serve

# In another terminal window...
export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx

aider --model ollama/deepseek-v3
```

It's important to provide model settings, especially the `num_ctx` parameter to
set the context window.
Ollama uses a 2k context window by default, which is very small for working with aider.
Larger context windows will allow you to work with larger amounts of code,
but will use more memory and increase latency.

Unlike most other LLM servers, Ollama does not throw an error if you submit a request that exceeds the context window. Instead, it just silently truncates the request by discarding the “oldest” messages in the chat to make it fit within the context window.

So if your context window is too small, you won’t get an explicit error. The biggest symptom will be that aider says it can’t see (some of) the files you added to the chat. That’s because ollama is silently discarding them because they exceed the context window.

Create a `.aider.model.settings.yml` file in your home directory or git project root with settings like this:

```yaml
- name: ollama/deepseek-v3
  edit_format: diff
  weak_model_name: null
  use_repo_map: true
  send_undo_reply: false
  lazy: false
  reminder: sys
  examples_as_sys_msg: true
  cache_control: false
  caches_by_default: true
  use_system_prompt: true
  use_temperature: true
  streaming: true
  extra_params:
    num_ctx: 8192 # How large a context window?
```

## Other providers

You will need to properly configure aider to work with DeepSeek V3 when served
via other providers:

- Determine the `--model` name to use.
- Provide your API key to aider.
- Add model settings to `.aider.model.settings.yml`, as sketched below.

Adapt the `.aider.model.settings.yml` shown above for Fireworks. You will need to change the `name` field to match your chosen provider's model naming scheme.

See [Advanced model settings](https://aider.chat/docs/config/adv-model-settings.html#model-settings) for details about all aider model settings.
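To make that concrete, here is a minimal sketch for a hypothetical provider. The provider name, model path, and `max_tokens` value below are placeholders for illustration only, not a real provider's scheme:

```yaml
# Sketch only: "example-provider" is hypothetical. Copy the full Fireworks
# settings shown above and substitute the model name your provider actually uses.
- name: openai/example-provider/DeepSeek-V3
  edit_format: diff
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
```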
## Results

<table style="width: 100%; max-width: 800px; margin: auto; border-collapse: collapse; box-shadow: 0 2px 4px rgba(0,0,0,0.1); font-size: 14px;">
  <thead style="background-color: #f2f2f2;">
    <tr>
      <th style="padding: 8px; text-align: left;">Model</th>
      <th style="padding: 8px; text-align: center;">Percent completed correctly</th>
      <th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
      <th style="padding: 8px; text-align: left;">Command</th>
      <th style="padding: 8px; text-align: center;">Edit format</th>
    </tr>
  </thead>
  <tbody>
    {% assign edit_sorted = site.data.deepseek-down | sort: 'pass_rate_2' | reverse %}
    {% for row in edit_sorted %}
    <tr style="border-bottom: 1px solid #ddd;">
      <td style="padding: 8px;">{{ row.model }}</td>
      <td style="padding: 8px; text-align: center;">{{ row.pass_rate_2 }}%</td>
      <td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
      <td style="padding: 8px;"><code>{{ row.command }}</code></td>
      <td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>

<script src="https://unpkg.com/patternomaly/dist/patternomaly.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
  {% assign data_source = edit_sorted %}
  {% assign pass_rate_field = "pass_rate_2" %}
  {% assign highlight_model = "DeepSeek" %}
  {% include leaderboard.js %}
</script>
<style>
  tr.selected {
    color: #0056b3;
  }
  table {
    table-layout: fixed;
  }
  td, th {
    word-wrap: break-word;
    overflow-wrap: break-word;
  }
  td:nth-child(3), td:nth-child(4) {
    font-size: 12px;
  }
</style>
aider/website/assets/r1-sonnet-sota.jpg (new binary file, 124 KiB; not shown)
(file diff suppressed because it is too large)
@@ -20,39 +20,6 @@
 ## Specify the model to use for the main chat
 #model: xxx
 
-## Use claude-3-opus-20240229 model for the main chat
-#opus: false
-
-## Use claude-3-5-sonnet-20241022 model for the main chat
-#sonnet: false
-
-## Use claude-3-5-haiku-20241022 model for the main chat
-#haiku: false
-
-## Use gpt-4-0613 model for the main chat
-#4: false
-
-## Use gpt-4o model for the main chat
-#4o: false
-
-## Use gpt-4o-mini model for the main chat
-#mini: false
-
-## Use gpt-4-1106-preview model for the main chat
-#4-turbo: false
-
-## Use gpt-3.5-turbo model for the main chat
-#35turbo: false
-
-## Use deepseek/deepseek-chat model for the main chat
-#deepseek: false
-
-## Use o1-mini model for the main chat
-#o1-mini: false
-
-## Use o1-preview model for the main chat
-#o1-preview: false
-
 ########################
 # API Keys and settings:
 
@@ -113,6 +80,12 @@
 # - yyy
 # - zzz
 
+## Set the reasoning_effort API parameter (default: not set)
+#reasoning-effort: xxx
+
+## Set the thinking token budget for models that support it (default: not set)
+#thinking-tokens: xxx
+
 ## Verify the SSL cert when connecting to models (default: True)
 #verify-ssl: true
 
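As a sketch of how these two new options might be set, the values below are illustrative examples rather than defaults, and the `32k` token format is an assumption based on the "32k thinking tokens" benchmark run above:

```yaml
# Example values only: high reasoning effort (as in the o3-mini benchmark run
# invoked with `aider --model o3-mini --reasoning-effort high`) and a 32k
# thinking-token budget (as in the Claude 3.7 Sonnet thinking run).
reasoning-effort: high
thinking-tokens: 32k
```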
@@ -195,7 +168,7 @@
 #user-input-color: #00cc00
 
 ## Set the color for tool output (default: None)
-#tool-output-color: xxx
+#tool-output-color: "xxx"
 
 ## Set the color for tool error messages (default: #FF2222)
 #tool-error-color: #FF2222
@@ -207,16 +180,16 @@
 #assistant-output-color: #0088ff
 
 ## Set the color for the completion menu (default: terminal's default text color)
-#completion-menu-color: xxx
+#completion-menu-color: "xxx"
 
 ## Set the background color for the completion menu (default: terminal's default background color)
-#completion-menu-bg-color: xxx
+#completion-menu-bg-color: "xxx"
 
 ## Set the color for the current item in the completion menu (default: terminal's default background color)
-#completion-menu-current-color: xxx
+#completion-menu-current-color: "xxx"
 
 ## Set the background color for the current item in the completion menu (default: terminal's default text color)
-#completion-menu-current-bg-color: xxx
+#completion-menu-current-bg-color: "xxx"
 
 ## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light, or a Pygments builtin style, see https://pygments.org/styles for available themes)
 #code-theme: default
@@ -410,6 +383,9 @@
 ## Specify the encoding for input and output (default: utf-8)
 #encoding: utf-8
 
+## Line endings to use when writing files (default: platform)
+#line-endings: platform
+
 ## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
 #config: xxx
 
@@ -425,8 +401,50 @@
 ## Enable/disable multi-line input mode with Meta-Enter to submit (default: False)
 #multiline: false
 
+## Enable/disable terminal bell notifications when LLM responses are ready (default: False)
+#notifications: false
+
+## Specify a command to run for notifications instead of the terminal bell. If not specified, a default command for your OS may be used.
+#notifications-command: xxx
+
 ## Enable/disable detection and offering to add URLs to chat (default: True)
 #detect-urls: true
 
 ## Specify which editor to use for the /editor command
 #editor: xxx
+
+############################
+# Deprecated model settings:
+
+## Use claude-3-opus-20240229 model for the main chat (deprecated, use --model)
+#opus: false
+
+## Use anthropic/claude-3-7-sonnet-20250219 model for the main chat (deprecated, use --model)
+#sonnet: false
+
+## Use claude-3-5-haiku-20241022 model for the main chat (deprecated, use --model)
+#haiku: false
+
+## Use gpt-4-0613 model for the main chat (deprecated, use --model)
+#4: false
+
+## Use gpt-4o model for the main chat (deprecated, use --model)
+#4o: false
+
+## Use gpt-4o-mini model for the main chat (deprecated, use --model)
+#mini: false
+
+## Use gpt-4-1106-preview model for the main chat (deprecated, use --model)
+#4-turbo: false
+
+## Use gpt-3.5-turbo model for the main chat (deprecated, use --model)
+#35turbo: false
+
+## Use deepseek/deepseek-chat model for the main chat (deprecated, use --model)
+#deepseek: false
+
+## Use o1-mini model for the main chat (deprecated, use --model)
+#o1-mini: false
+
+## Use o1-preview model for the main chat (deprecated, use --model)
+#o1-preview: false
@@ -24,39 +24,6 @@
 ## Specify the model to use for the main chat
 #AIDER_MODEL=
 
-## Use claude-3-opus-20240229 model for the main chat
-#AIDER_OPUS=
-
-## Use claude-3-5-sonnet-20241022 model for the main chat
-#AIDER_SONNET=
-
-## Use claude-3-5-haiku-20241022 model for the main chat
-#AIDER_HAIKU=
-
-## Use gpt-4-0613 model for the main chat
-#AIDER_4=
-
-## Use gpt-4o model for the main chat
-#AIDER_4O=
-
-## Use gpt-4o-mini model for the main chat
-#AIDER_MINI=
-
-## Use gpt-4-1106-preview model for the main chat
-#AIDER_4_TURBO=
-
-## Use gpt-3.5-turbo model for the main chat
-#AIDER_35TURBO=
-
-## Use deepseek/deepseek-chat model for the main chat
-#AIDER_DEEPSEEK=
-
-## Use o1-mini model for the main chat
-#AIDER_O1_MINI=
-
-## Use o1-preview model for the main chat
-#AIDER_O1_PREVIEW=
-
 ########################
 # API Keys and settings:
 
@@ -102,6 +69,12 @@
 ## Add a model alias (can be used multiple times)
 #AIDER_ALIAS=
 
+## Set the reasoning_effort API parameter (default: not set)
+#AIDER_REASONING_EFFORT=
+
+## Set the thinking token budget for models that support it (default: not set)
+#AIDER_THINKING_TOKENS=
+
 ## Verify the SSL cert when connecting to models (default: True)
 #AIDER_VERIFY_SSL=true
 
@@ -381,6 +354,9 @@
 ## Specify the encoding for input and output (default: utf-8)
 #AIDER_ENCODING=utf-8
 
+## Line endings to use when writing files (default: platform)
+#AIDER_LINE_ENDINGS=platform
+
 ## Specify the .env file to load (default: .env in git root)
 #AIDER_ENV_FILE=.env
 
@@ -393,8 +369,50 @@
 ## Enable/disable multi-line input mode with Meta-Enter to submit (default: False)
 #AIDER_MULTILINE=false
 
+## Enable/disable terminal bell notifications when LLM responses are ready (default: False)
+#AIDER_NOTIFICATIONS=false
+
+## Specify a command to run for notifications instead of the terminal bell. If not specified, a default command for your OS may be used.
+#AIDER_NOTIFICATIONS_COMMAND=
+
 ## Enable/disable detection and offering to add URLs to chat (default: True)
 #AIDER_DETECT_URLS=true
 
 ## Specify which editor to use for the /editor command
 #AIDER_EDITOR=
+
+############################
+# Deprecated model settings:
+
+## Use claude-3-opus-20240229 model for the main chat (deprecated, use --model)
+#AIDER_OPUS=false
+
+## Use anthropic/claude-3-7-sonnet-20250219 model for the main chat (deprecated, use --model)
+#AIDER_SONNET=false
+
+## Use claude-3-5-haiku-20241022 model for the main chat (deprecated, use --model)
+#AIDER_HAIKU=false
+
+## Use gpt-4-0613 model for the main chat (deprecated, use --model)
+#AIDER_4=false
+
+## Use gpt-4o model for the main chat (deprecated, use --model)
+#AIDER_4O=false
+
+## Use gpt-4o-mini model for the main chat (deprecated, use --model)
+#AIDER_MINI=false
+
+## Use gpt-4-1106-preview model for the main chat (deprecated, use --model)
+#AIDER_4_TURBO=false
+
+## Use gpt-3.5-turbo model for the main chat (deprecated, use --model)
+#AIDER_35TURBO=false
+
+## Use deepseek/deepseek-chat model for the main chat (deprecated, use --model)
+#AIDER_DEEPSEEK=false
+
+## Use o1-mini model for the main chat (deprecated, use --model)
+#AIDER_O1_MINI=false
+
+## Use o1-preview model for the main chat (deprecated, use --model)
+#AIDER_O1_PREVIEW=false
(File diff suppressed because it is too large)
@@ -7,13 +7,15 @@ description: How to configure aider with a yaml config file.
 # YAML config file
 
 Most of aider's options can be set in an `.aider.conf.yml` file.
-Aider will look for a this file in these locations and
-load whichever is found first.
+Aider will look for this file in these locations:
 
-- As specified with the `--config <filename>` parameter.
-- The current directory.
-- The root of your git repo.
 - Your home directory.
+- The root of your git repo.
+- The current directory.
+
+If the files above exist, they will be loaded in that order. Files loaded last will take priority.
+
+You can also specify the `--config <filename>` parameter, which will only load the one config file.
 
 {% include keys.md %}
 
@@ -72,39 +74,6 @@ cog.outl("```")
 ## Specify the model to use for the main chat
 #model: xxx
 
-## Use claude-3-opus-20240229 model for the main chat
-#opus: false
-
-## Use claude-3-5-sonnet-20241022 model for the main chat
-#sonnet: false
-
-## Use claude-3-5-haiku-20241022 model for the main chat
-#haiku: false
-
-## Use gpt-4-0613 model for the main chat
-#4: false
-
-## Use gpt-4o model for the main chat
-#4o: false
-
-## Use gpt-4o-mini model for the main chat
-#mini: false
-
-## Use gpt-4-1106-preview model for the main chat
-#4-turbo: false
-
-## Use gpt-3.5-turbo model for the main chat
-#35turbo: false
-
-## Use deepseek/deepseek-chat model for the main chat
-#deepseek: false
-
-## Use o1-mini model for the main chat
-#o1-mini: false
-
-## Use o1-preview model for the main chat
-#o1-preview: false
-
 ########################
 # API Keys and settings:
@@ -165,6 +134,12 @@ cog.outl("```")
 # - yyy
 # - zzz
 
+## Set the reasoning_effort API parameter (default: not set)
+#reasoning-effort: xxx
+
+## Set the thinking token budget for models that support it (default: not set)
+#thinking-tokens: xxx
+
 ## Verify the SSL cert when connecting to models (default: True)
 #verify-ssl: true
 
@@ -247,7 +222,7 @@ cog.outl("```")
 #user-input-color: #00cc00
 
 ## Set the color for tool output (default: None)
-#tool-output-color: xxx
+#tool-output-color: "xxx"
 
 ## Set the color for tool error messages (default: #FF2222)
 #tool-error-color: #FF2222
@@ -259,16 +234,16 @@ cog.outl("```")
 #assistant-output-color: #0088ff
 
 ## Set the color for the completion menu (default: terminal's default text color)
-#completion-menu-color: xxx
+#completion-menu-color: "xxx"
 
 ## Set the background color for the completion menu (default: terminal's default background color)
-#completion-menu-bg-color: xxx
+#completion-menu-bg-color: "xxx"
 
 ## Set the color for the current item in the completion menu (default: terminal's default background color)
-#completion-menu-current-color: xxx
+#completion-menu-current-color: "xxx"
 
 ## Set the background color for the current item in the completion menu (default: terminal's default text color)
-#completion-menu-current-bg-color: xxx
+#completion-menu-current-bg-color: "xxx"
 
 ## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light, or a Pygments builtin style, see https://pygments.org/styles for available themes)
 #code-theme: default
@@ -462,6 +437,9 @@ cog.outl("```")
 ## Specify the encoding for input and output (default: utf-8)
 #encoding: utf-8
 
+## Line endings to use when writing files (default: platform)
+#line-endings: platform
+
 ## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
 #config: xxx
 
@@ -477,10 +455,52 @@ cog.outl("```")
 ## Enable/disable multi-line input mode with Meta-Enter to submit (default: False)
 #multiline: false
 
+## Enable/disable terminal bell notifications when LLM responses are ready (default: False)
+#notifications: false
+
+## Specify a command to run for notifications instead of the terminal bell. If not specified, a default command for your OS may be used.
+#notifications-command: xxx
+
 ## Enable/disable detection and offering to add URLs to chat (default: True)
 #detect-urls: true
 
 ## Specify which editor to use for the /editor command
 #editor: xxx
+
+############################
+# Deprecated model settings:
+
+## Use claude-3-opus-20240229 model for the main chat (deprecated, use --model)
+#opus: false
+
+## Use anthropic/claude-3-7-sonnet-20250219 model for the main chat (deprecated, use --model)
+#sonnet: false
+
+## Use claude-3-5-haiku-20241022 model for the main chat (deprecated, use --model)
+#haiku: false
+
+## Use gpt-4-0613 model for the main chat (deprecated, use --model)
+#4: false
+
+## Use gpt-4o model for the main chat (deprecated, use --model)
+#4o: false
+
+## Use gpt-4o-mini model for the main chat (deprecated, use --model)
+#mini: false
+
+## Use gpt-4-1106-preview model for the main chat (deprecated, use --model)
+#4-turbo: false
+
+## Use gpt-3.5-turbo model for the main chat (deprecated, use --model)
+#35turbo: false
+
+## Use deepseek/deepseek-chat model for the main chat (deprecated, use --model)
+#deepseek: false
+
+## Use o1-mini model for the main chat (deprecated, use --model)
+#o1-mini: false
+
+## Use o1-preview model for the main chat (deprecated, use --model)
+#o1-preview: false
 ```
 <!--[[[end]]]-->
@@ -64,39 +64,6 @@ cog.outl("```")
 ## Specify the model to use for the main chat
 #AIDER_MODEL=
 
-## Use claude-3-opus-20240229 model for the main chat
-#AIDER_OPUS=
-
-## Use claude-3-5-sonnet-20241022 model for the main chat
-#AIDER_SONNET=
-
-## Use claude-3-5-haiku-20241022 model for the main chat
-#AIDER_HAIKU=
-
-## Use gpt-4-0613 model for the main chat
-#AIDER_4=
-
-## Use gpt-4o model for the main chat
-#AIDER_4O=
-
-## Use gpt-4o-mini model for the main chat
-#AIDER_MINI=
-
-## Use gpt-4-1106-preview model for the main chat
-#AIDER_4_TURBO=
-
-## Use gpt-3.5-turbo model for the main chat
-#AIDER_35TURBO=
-
-## Use deepseek/deepseek-chat model for the main chat
-#AIDER_DEEPSEEK=
-
-## Use o1-mini model for the main chat
-#AIDER_O1_MINI=
-
-## Use o1-preview model for the main chat
-#AIDER_O1_PREVIEW=
-
 ########################
 # API Keys and settings:
@@ -142,6 +109,12 @@ cog.outl("```")
 ## Add a model alias (can be used multiple times)
 #AIDER_ALIAS=
 
+## Set the reasoning_effort API parameter (default: not set)
+#AIDER_REASONING_EFFORT=
+
+## Set the thinking token budget for models that support it (default: not set)
+#AIDER_THINKING_TOKENS=
+
 ## Verify the SSL cert when connecting to models (default: True)
 #AIDER_VERIFY_SSL=true
 
@@ -421,6 +394,9 @@ cog.outl("```")
 ## Specify the encoding for input and output (default: utf-8)
 #AIDER_ENCODING=utf-8
 
+## Line endings to use when writing files (default: platform)
+#AIDER_LINE_ENDINGS=platform
+
 ## Specify the .env file to load (default: .env in git root)
 #AIDER_ENV_FILE=.env
 
@@ -433,10 +409,52 @@ cog.outl("```")
 ## Enable/disable multi-line input mode with Meta-Enter to submit (default: False)
 #AIDER_MULTILINE=false
 
+## Enable/disable terminal bell notifications when LLM responses are ready (default: False)
+#AIDER_NOTIFICATIONS=false
+
+## Specify a command to run for notifications instead of the terminal bell. If not specified, a default command for your OS may be used.
+#AIDER_NOTIFICATIONS_COMMAND=
+
 ## Enable/disable detection and offering to add URLs to chat (default: True)
 #AIDER_DETECT_URLS=true
 
 ## Specify which editor to use for the /editor command
 #AIDER_EDITOR=
+
+############################
+# Deprecated model settings:
+
+## Use claude-3-opus-20240229 model for the main chat (deprecated, use --model)
+#AIDER_OPUS=false
+
+## Use anthropic/claude-3-7-sonnet-20250219 model for the main chat (deprecated, use --model)
+#AIDER_SONNET=false
+
+## Use claude-3-5-haiku-20241022 model for the main chat (deprecated, use --model)
+#AIDER_HAIKU=false
+
+## Use gpt-4-0613 model for the main chat (deprecated, use --model)
+#AIDER_4=false
+
+## Use gpt-4o model for the main chat (deprecated, use --model)
+#AIDER_4O=false
+
+## Use gpt-4o-mini model for the main chat (deprecated, use --model)
+#AIDER_MINI=false
+
+## Use gpt-4-1106-preview model for the main chat (deprecated, use --model)
+#AIDER_4_TURBO=false
+
+## Use gpt-3.5-turbo model for the main chat (deprecated, use --model)
+#AIDER_35TURBO=false
+
+## Use deepseek/deepseek-chat model for the main chat (deprecated, use --model)
+#AIDER_DEEPSEEK=false
+
+## Use o1-mini model for the main chat (deprecated, use --model)
+#AIDER_O1_MINI=false
+
+## Use o1-preview model for the main chat (deprecated, use --model)
+#AIDER_O1_PREVIEW=false
 ```
 <!--[[[end]]]-->
@@ -13,29 +13,52 @@ Model aliases allow you to create shorthand names for models you frequently use.
 You can define aliases when launching aider using the `--alias` option:
 
 ```bash
-aider --alias "fast:gpt-3.5-turbo" --alias "smart:gpt-4"
+aider --alias "fast:gpt-4o-mini" --alias "smart:o3-mini"
 ```
 
 Multiple aliases can be defined by using the `--alias` option multiple times. Each alias definition should be in the format `alias:model-name`.
 
 ## Configuration File
 
-You can also define aliases in your [`.aider.conf.yml` file](https://aider.chat/docs/config/aider_conf.html):
+Of course,
+you can also define aliases in your [`.aider.conf.yml` file](https://aider.chat/docs/config/aider_conf.html):
 
 ```yaml
 alias:
-  - "fast:gpt-3.5-turbo"
-  - "smart:gpt-4"
+  - "fast:gpt-4o-mini"
+  - "smart:o3-mini"
   - "hacker:claude-3-sonnet-20240229"
 ```
 
 ## Using Aliases
 
-Once defined, you can use the alias instead of the full model name:
+Once defined, you can use the alias instead of the full model name from the command line:
 
 ```bash
-aider --model fast  # Uses gpt-3.5-turbo
-aider --model smart # Uses gpt-4
+aider --model fast  # Uses gpt-4o-mini
+aider --model smart # Uses o3-mini
+```
+
+Or with the `/model` command in-chat:
+
+```
+Aider v0.75.3
+Main model: anthropic/claude-3-7-sonnet-20250219 with diff edit format, prompt cache, infinite output
+Weak model: claude-3-5-sonnet-20241022
+Git repo: .git with 406 files
+Repo-map: using 4096 tokens, files refresh
+───────────────────────────────────────────────
+> /model fast
+
+Aider v0.75.3
+Main model: gpt-4o-mini with diff edit format
+───────────────────────────────────────────────
+diff> /model smart
+
+Aider v0.75.3
+Main model: o3-mini with diff edit format
+───────────────────────────────────────────────
+>
 ```
 
 ## Built-in Aliases
@@ -59,7 +82,8 @@ for alias, model in sorted(MODEL_ALIASES.items()):
 - `flash`: gemini/gemini-2.0-flash-exp
 - `haiku`: claude-3-5-haiku-20241022
 - `opus`: claude-3-opus-20240229
-- `sonnet`: claude-3-5-sonnet-20241022
+- `r1`: deepseek/deepseek-reasoner
+- `sonnet`: anthropic/claude-3-7-sonnet-20250219
 <!--[[[end]]]-->
 
 ## Priority
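Since `r1` and `sonnet` are now built-in aliases, they work anywhere a full model name is accepted; a minimal sketch, assuming the matching API keys are already set:

```bash
# Built-in alias: resolves to deepseek/deepseek-reasoner
aider --model r1

# Built-in alias: resolves to anthropic/claude-3-7-sonnet-20250219
aider --model sonnet
```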
@@ -22,17 +22,15 @@ from aider.args import get_md_help
 cog.out(get_md_help())
 ]]]-->
 ```
-usage: aider [-h] [--model] [--opus] [--sonnet] [--haiku] [--4]
-             [--4o] [--mini] [--4-turbo] [--35turbo] [--deepseek]
-             [--o1-mini] [--o1-preview] [--openai-api-key]
-             [--anthropic-api-key] [--openai-api-base]
-             [--openai-api-type] [--openai-api-version]
-             [--openai-api-deployment-id] [--openai-organization-id]
-             [--set-env] [--api-key] [--list-models]
-             [--model-settings-file] [--model-metadata-file]
-             [--alias] [--verify-ssl | --no-verify-ssl] [--timeout]
-             [--edit-format] [--architect] [--weak-model]
-             [--editor-model] [--editor-edit-format]
+usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
+             [--openai-api-base] [--openai-api-type]
+             [--openai-api-version] [--openai-api-deployment-id]
+             [--openai-organization-id] [--set-env] [--api-key]
+             [--list-models] [--model-settings-file]
+             [--model-metadata-file] [--alias] [--reasoning-effort]
+             [--thinking-tokens] [--verify-ssl | --no-verify-ssl]
+             [--timeout] [--edit-format] [--architect]
+             [--weak-model] [--editor-model] [--editor-edit-format]
              [--show-model-warnings | --no-show-model-warnings]
              [--max-chat-history-tokens]
              [--cache-prompts | --no-cache-prompts]
@@ -73,11 +71,15 @@ usage: aider [-h] [--model] [--opus] [--sonnet] [--haiku] [--4]
              [--show-prompts] [--voice-format] [--voice-language]
              [--voice-input-device] [--file] [--read] [--vim]
              [--chat-language] [--yes-always] [-v] [--load]
-             [--encoding] [-c] [--env-file]
+             [--encoding] [--line-endings] [-c] [--env-file]
              [--suggest-shell-commands | --no-suggest-shell-commands]
              [--fancy-input | --no-fancy-input]
              [--multiline | --no-multiline]
-             [--detect-urls | --no-detect-urls] [--editor]
+             [--notifications | --no-notifications]
+             [--notifications-command]
+             [--detect-urls | --no-detect-urls] [--editor] [--opus]
+             [--sonnet] [--haiku] [--4] [--4o] [--mini] [--4-turbo]
+             [--35turbo] [--deepseek] [--o1-mini] [--o1-preview]
 ```
 
@@ -95,58 +97,6 @@ Aliases:
 Specify the model to use for the main chat
 Environment variable: `AIDER_MODEL`
 
-### `--opus`
-Use claude-3-opus-20240229 model for the main chat
-Environment variable: `AIDER_OPUS`
-
-### `--sonnet`
-Use claude-3-5-sonnet-20241022 model for the main chat
-Environment variable: `AIDER_SONNET`
-
-### `--haiku`
-Use claude-3-5-haiku-20241022 model for the main chat
-Environment variable: `AIDER_HAIKU`
-
-### `--4`
-Use gpt-4-0613 model for the main chat
-Environment variable: `AIDER_4`
-Aliases:
-- `--4`
-- `-4`
-
-### `--4o`
-Use gpt-4o model for the main chat
-Environment variable: `AIDER_4O`
-
-### `--mini`
-Use gpt-4o-mini model for the main chat
-Environment variable: `AIDER_MINI`
-
-### `--4-turbo`
-Use gpt-4-1106-preview model for the main chat
-Environment variable: `AIDER_4_TURBO`
-
-### `--35turbo`
-Use gpt-3.5-turbo model for the main chat
-Environment variable: `AIDER_35TURBO`
-Aliases:
-- `--35turbo`
-- `--35-turbo`
-- `--3`
-- `-3`
-
-### `--deepseek`
-Use deepseek/deepseek-chat model for the main chat
-Environment variable: `AIDER_DEEPSEEK`
-
-### `--o1-mini`
-Use o1-mini model for the main chat
-Environment variable: `AIDER_O1_MINI`
-
-### `--o1-preview`
-Use o1-preview model for the main chat
-Environment variable: `AIDER_O1_PREVIEW`
-
 ## API Keys and settings:
 
 ### `--openai-api-key VALUE`
@@ -210,6 +160,14 @@ Environment variable: `AIDER_MODEL_METADATA_FILE`
 Add a model alias (can be used multiple times)
 Environment variable: `AIDER_ALIAS`
 
+### `--reasoning-effort VALUE`
+Set the reasoning_effort API parameter (default: not set)
+Environment variable: `AIDER_REASONING_EFFORT`
+
+### `--thinking-tokens VALUE`
+Set the thinking token budget for models that support it (default: not set)
+Environment variable: `AIDER_THINKING_TOKENS`
+
 ### `--verify-ssl`
 Verify the SSL cert when connecting to models (default: True)
 Default: True
@@ -705,6 +663,11 @@ Specify the encoding for input and output (default: utf-8)
 Default: utf-8
 Environment variable: `AIDER_ENCODING`
 
+### `--line-endings VALUE`
+Line endings to use when writing files (default: platform)
+Default: platform
+Environment variable: `AIDER_LINE_ENDINGS`
+
 ### `--config CONFIG_FILE`
 Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
 Aliases:
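For example, a repo that should always be written with Unix line endings can pin this explicitly; a minimal sketch, assuming `lf` and `crlf` are the accepted values alongside the default `platform`:

```bash
# Always write files with LF line endings
aider --line-endings lf
```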
@@ -740,6 +703,18 @@ Aliases:
 - `--multiline`
 - `--no-multiline`
 
+### `--notifications`
+Enable/disable terminal bell notifications when LLM responses are ready (default: False)
+Default: False
+Environment variable: `AIDER_NOTIFICATIONS`
+Aliases:
+- `--notifications`
+- `--no-notifications`
+
+### `--notifications-command COMMAND`
+Specify a command to run for notifications instead of the terminal bell. If not specified, a default command for your OS may be used.
+Environment variable: `AIDER_NOTIFICATIONS_COMMAND`
+
 ### `--detect-urls`
 Enable/disable detection and offering to add URLs to chat (default: True)
 Default: True
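A sketch of how the two notification options combine; the `notify-send` utility is an assumption about the local Linux desktop, so substitute your own command elsewhere:

```bash
# Ring the terminal bell when the LLM finishes responding
aider --notifications

# Run a custom command instead of the bell
aider --notifications --notifications-command "notify-send 'aider' 'LLM response ready'"
```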
@@ -751,4 +726,69 @@ Aliases:
 ### `--editor VALUE`
 Specify which editor to use for the /editor command
 Environment variable: `AIDER_EDITOR`
 
+## Deprecated model settings:
+
+### `--opus`
+Use claude-3-opus-20240229 model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_OPUS`
+
+### `--sonnet`
+Use anthropic/claude-3-7-sonnet-20250219 model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_SONNET`
+
+### `--haiku`
+Use claude-3-5-haiku-20241022 model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_HAIKU`
+
+### `--4`
+Use gpt-4-0613 model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_4`
+Aliases:
+- `--4`
+- `-4`
+
+### `--4o`
+Use gpt-4o model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_4O`
+
+### `--mini`
+Use gpt-4o-mini model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_MINI`
+
+### `--4-turbo`
+Use gpt-4-1106-preview model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_4_TURBO`
+
+### `--35turbo`
+Use gpt-3.5-turbo model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_35TURBO`
+Aliases:
+- `--35turbo`
+- `--35-turbo`
+- `--3`
+- `-3`
+
+### `--deepseek`
+Use deepseek/deepseek-chat model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_DEEPSEEK`
+
+### `--o1-mini`
+Use o1-mini model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_O1_MINI`
+
+### `--o1-preview`
+Use o1-preview model for the main chat (deprecated, use --model)
+Default: False
+Environment variable: `AIDER_O1_PREVIEW`
 <!--[[[end]]]-->
aider/website/docs/config/reasoning.md (new file, 106 lines)
@@ -0,0 +1,106 @@
+---
+parent: Configuration
+nav_order: 110
+description: How to configure reasoning model settings from secondary providers.
+---
+
+# Reasoning models
+
+## Reasoning effort
+
+You can use the `--reasoning-effort` switch to control the reasoning effort
+of models which support this setting.
+This switch is useful for OpenAI's reasoning models.
+
+You can also use the `--thinking-tokens` switch to request
+the model use a certain number of thinking tokens.
+This switch is useful for Sonnet 3.7.
+
+## Thinking tokens in XML tags
+
+There is also a `reasoning_tag` setting, which takes the name of an XML tag
+that the model uses to wrap its reasoning/thinking output.
+
+For example when using DeepSeek R1 from Fireworks, the reasoning comes back inside
+`<think>...</think>` tags, so aider's settings
+include `reasoning_tag: think`.
+
+```
+<think>
+The user wants me to greet them!
+</think>
+
+Hello!
+```
+
+Aider will display the thinking/reasoning output,
+but it won't be used for file editing instructions, etc.
+Aider will rely on the non-thinking output for instructions on how to make code changes, etc.
+
+```yaml
+- name: fireworks_ai/accounts/fireworks/models/deepseek-r1
+  edit_format: diff
+  weak_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
+  use_repo_map: true
+  extra_params:
+    max_tokens: 160000
+  use_temperature: false
+  editor_model_name: fireworks_ai/accounts/fireworks/models/deepseek-v3
+  editor_edit_format: editor-diff
+  reasoning_tag: think # <---
+```
+
+## Reasoning model limitations
+
+Many
+"reasoning" models have restrictions on how they can be used:
+they sometimes prohibit streaming, use of temperature and/or the system prompt.
+
+Aider is configured to work properly with these models
+when served through major provider APIs.
+
+You may need to [configure model settings](/docs/config/adv-model-settings.html)
+if you are using them through another provider
+and see errors related to temperature or system prompt.
+
+Include settings for your new provider in `.aider.model.setting.yml` file
+at the root of your project or in your home directory.
+
+### Temperature, streaming and system prompt
+
+You should find one of the existing model setting configuration entries
+for the model you are interested in, say o3-mini:
+
+```yaml
+- name: o3-mini
+  edit_format: diff
+  weak_model_name: gpt-4o-mini
+  use_repo_map: true
+  use_temperature: false # <---
+  editor_model_name: gpt-4o
+  editor_edit_format: editor-diff
+```
+
+Pay attention to these settings, which must be set to `false`
+for certain reasoning models:
+
+- `use_temperature`
+- `streaming`
+- `use_system_prompt`
+
+Here's an example of
+the settings to use o3-mini via Azure.
+Note that aider already has these settings pre-configured, but they
+serve as a good example of how to adapt the main model
+settings for a different provider.
+
+```yaml
+- name: azure/o3-mini
+  edit_format: diff
+  weak_model_name: azure/gpt-4o-mini
+  use_repo_map: true
+  use_temperature: false # <---
+  editor_model_name: azure/gpt-4o
+  editor_edit_format: editor-diff
+```
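Combining the two switches above with suitable models, a minimal sketch; the `high` effort level and the `8k` budget format are assumptions about what the underlying APIs accept, not recommendations:

```bash
# More deliberate reasoning from an OpenAI reasoning model
aider --model o3-mini --reasoning-effort high

# Give Sonnet 3.7 a thinking token budget
aider --model sonnet --thinking-tokens 8k
```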
@@ -141,6 +141,18 @@ When starting a fresh aider session, you can include recent git history in the c
 Remember, the chat history already includes recent changes made during the current session, so this tip is most useful when starting a new aider session and you want to provide context about recent work.
 
+You can also use aider to review PR branches:
+
+```
+/run git diff one-branch..another-branch
+
+...
+
+Add 6.9k tokens of command output to the chat? (Y)es/(N)o [Yes]: Yes
+
+/ask Are there any problems with the way this change works with the FooBar class?
+```
+
 {: .tip }
 The `/git` command will not work for this purpose, as its output is not included in the chat.
 
@@ -237,13 +249,15 @@ tr:hover { background-color: #f5f5f5; }
 </style>
 <table>
 <tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
-<tr><td>deepseek/deepseek-coder</td><td class='right'>554,258</td><td class='right'>49.8%</td></tr>
-<tr><td>deepseek/deepseek-chat</td><td class='right'>482,002</td><td class='right'>43.3%</td></tr>
-<tr><td>claude-3-5-sonnet-20241022</td><td class='right'>61,093</td><td class='right'>5.5%</td></tr>
-<tr><td>gemini/gemini-1.5-flash-8b</td><td class='right'>8,297</td><td class='right'>0.7%</td></tr>
-<tr><td>gemini/gemini-1.5-flash-002</td><td class='right'>4,964</td><td class='right'>0.4%</td></tr>
-<tr><td>o1</td><td class='right'>2,590</td><td class='right'>0.2%</td></tr>
+<tr><td>anthropic/claude-3-7-sonnet-20250219</td><td class='right'>974,381</td><td class='right'>95.5%</td></tr>
+<tr><td>openrouter/deepseek/deepseek-r1</td><td class='right'>40,786</td><td class='right'>4.0%</td></tr>
+<tr><td>groq/REDACTED</td><td class='right'>3,914</td><td class='right'>0.4%</td></tr>
+<tr><td>fireworks_ai/accounts/fireworks/models/deepseek-r1</td><td class='right'>1,398</td><td class='right'>0.1%</td></tr>
 </table>
 
+{: .note :}
+Some models show as REDACTED, because they are new or unpopular models.
+Aider's analytics only records the names of "well known" LLMs.
 <!--[[[end]]]-->
 
 ## How are the "aider wrote xx% of code" stats computed?
@@ -96,14 +96,7 @@ to keep aider's dependencies separated.
 You can use pip to install aider with python versions 3.9-3.12.
 
 ```bash
-# Install aider
 python -m pip install -U --upgrade-strategy only-if-needed aider-chat
-
-# To work with GPT-4o:
-aider --4o --openai-api-key sk-xxx...
-
-# To work with Claude 3.5 Sonnet:
-aider --sonnet --anthropic-api-key sk-xxx...
 ```
 
 {% include python-m-aider.md %}
@@ -17,21 +17,14 @@ Aider works best if you have git installed.
 Here are
 [instructions for installing git in various environments](https://github.com/git-guides/install-git).
 
-## Get your API key
+## Setup an API key
 
-To work with OpenAI's models like GPT-4o or o1-preview you need a paid
-[OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key).
-Note that this is different than being a "ChatGPT Plus" subscriber.
+You need a key from an API provider to work with most models:
 
-To work with Anthropic's models like Claude 3.5 Sonnet you need a paid
-[Anthropic API key](https://docs.anthropic.com/claude/reference/getting-started-with-the-api).
+- [OpenAI](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) provides o1, o3-mini, gpt-4o and other models. Note that paying for an API key is different than being a "ChatGPT" subscriber.
+- [Anthropic](https://docs.anthropic.com/claude/reference/getting-started-with-the-api) provides Claude 3.7 Sonnet and Haiku.
+- [DeepSeek](https://platform.deepseek.com/api_keys) provides DeepSeek R1 and DeepSeek Chat V3.
+- [OpenRouter](https://openrouter.ai/keys) allows you to access models from many providers using a single key.
 
-### Working with other LLMs
-
-{% include works-best.md %}
-
-### Store your api keys
-
 You can [store your api keys in configuration or env files](/docs/config/api-keys.html)
 and they will be loaded automatically whenever you run aider.
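For instance, a `.env` file at the git root can hold the keys; a minimal sketch, with `sk-xxx...` as placeholders:

```bash
# .env -- loaded automatically by aider
OPENAI_API_KEY=sk-xxx...
ANTHROPIC_API_KEY=sk-xxx...
DEEPSEEK_API_KEY=sk-xxx...
```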
@@ -105,11 +98,3 @@ please let us know by opening a
 [GitHub issue](https://github.com/Aider-AI/aider/issues).
 
-
-## Install the development version of aider
-
-If you want the very latest development version of aider
-you can install it like this:
-
-```
-aider --install-main-branch
-```
@@ -57,10 +57,10 @@ cog.out(get_supported_languages_md())
 |:--------:|:--------------:|:--------:|:------:|
 | bash | .bash | | ✓ |
 | c | .c | ✓ | ✓ |
-| c_sharp | .cs | ✓ | ✓ |
 | commonlisp | .cl | | ✓ |
 | cpp | .cc | ✓ | ✓ |
 | cpp | .cpp | ✓ | ✓ |
+| csharp | .cs | ✓ | ✓ |
 | css | .css | | ✓ |
 | dockerfile | .dockerfile | | ✓ |
 | dot | .dot | | ✓ |
@@ -73,7 +73,8 @@ cog.out(get_supported_languages_md())
 | gomod | .gomod | | ✓ |
 | hack | .hack | | ✓ |
 | haskell | .hs | | ✓ |
-| hcl | .hcl | | ✓ |
+| hcl | .hcl | ✓ | ✓ |
+| hcl | .tf | ✓ | ✓ |
 | html | .html | | ✓ |
 | java | .java | ✓ | ✓ |
 | javascript | .js | ✓ | ✓ |
@@ -81,15 +82,14 @@ cog.out(get_supported_languages_md())
 | jsdoc | .jsdoc | | ✓ |
 | json | .json | | ✓ |
 | julia | .jl | | ✓ |
-| kotlin | .kt | | ✓ |
+| kotlin | .kt | ✓ | ✓ |
 | lua | .lua | | ✓ |
 | make | .mk | | ✓ |
+| markdown | .md | | ✓ |
 | objc | .m | | ✓ |
-| ocaml | .ml | ✓ | ✓ |
 | perl | .pl | | ✓ |
 | php | .php | ✓ | ✓ |
 | python | .py | ✓ | ✓ |
-| ql | .ql | ✓ | ✓ |
 | r | .R | | ✓ |
 | r | .r | | ✓ |
 | regex | .regex | | ✓ |
@@ -113,9 +113,8 @@ import subprocess
 import datetime
 
 files = [
-    'aider/website/docs/leaderboards/index.md',
+    'aider/website/docs/leaderboards/edit.md',
     'aider/website/_data/edit_leaderboard.yml',
-    'aider/website/_data/refactor_leaderboard.yml'
 ]
 
 def get_last_modified_date(file):
@@ -129,6 +128,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
 latest_mod_date = max(mod_dates)
 cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
 ]]]-->
-December 16, 2024.
+January 16, 2025.
 <!--[[[end]]]-->
 </p>
@@ -19,16 +19,9 @@ While [aider can connect to almost any LLM](/docs/llms.html),
 it works best with models that score well on the benchmarks.
 
-
-{: .note :}
-The
-[original aider code editing leaderboard](edit.html)
-has been replaced by this
-new, much more challenging
-[polyglot leaderboard](https://aider.chat/2024/12/21/polyglot.html).
-
 ## Polyglot leaderboard
 
-[Aider's polyglot benchmark](/docs/benchmarks.html#the-benchmark)
+[Aider's polyglot benchmark](https://aider.chat/2024/12/21/polyglot.html#the-polyglot-benchmark)
 asks the LLM to edit source files to complete 225 coding exercises
 from Exercism.
 It contains exercises in many popular programming languages:
@@ -52,6 +45,7 @@ The model also has to successfully apply all its changes to the source file with
 <th style="padding: 8px; text-align: center;">Percent using correct edit format</th>
 <th style="padding: 8px; text-align: left;">Command</th>
 <th style="padding: 8px; text-align: center;">Edit format</th>
+<th style="padding: 8px; text-align: center;">Total Cost</th>
 </tr>
 </thead>
 <tbody>
@@ -63,6 +57,7 @@ The model also has to successfully apply all its changes to the source file with
 <td style="padding: 8px; text-align: center;">{{ row.percent_cases_well_formed }}%</td>
 <td style="padding: 8px;"><code>{{ row.command }}</code></td>
 <td style="padding: 8px; text-align: center;">{{ row.edit_format }}</td>
+<td style="padding: 8px; text-align: center;">{% if row.total_cost == 0 %}?{% else %}${{ row.total_cost | times: 1.0 | round: 2 }}{% endif %}</td>
 </tr>
 {% endfor %}
 </tbody>
@@ -76,7 +71,7 @@ The model also has to successfully apply all its changes to the source file with
 <script>
 {% assign data_source = edit_sorted %}
 {% assign pass_rate_field = "pass_rate_2" %}
-{% assign highlight_model = "xxxxxxxxxxx" %}
+{% assign highlight_model = "xxxxxx" %}
 {% include leaderboard.js %}
 </script>
 <style>
@@ -107,8 +102,7 @@ import datetime
 
 files = [
     'aider/website/docs/leaderboards/index.md',
-    'aider/website/_data/edit_leaderboard.yml',
-    'aider/website/_data/refactor_leaderboard.yml'
+    'aider/website/_data/polyglot_leaderboard.yml',
 ]
 
 def get_last_modified_date(file):
@@ -122,6 +116,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
 latest_mod_date = max(mod_dates)
 cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
 ]]]-->
-December 26, 2024.
+March 07, 2025.
 <!--[[[end]]]-->
 </p>
@@ -5,6 +5,15 @@ nav_order: 800
 
 # Benchmark notes
 
+## Notes on pricing
+
+All pricing information is the cost to run the benchmark at the time it was
+run.
+Providers change their pricing, and every benchmark run ends up with a slightly
+different cost.
+Pricing is provided on a *best efforts* basis, and may not always be current
+or fully accurate.
+
 ## Notes on benchmarking results
 
 The key benchmarking results are:
@@ -50,3 +50,29 @@ Therefore, results are available for fewer models.
 </script>
 
+
+<p class="post-date">
+By Paul Gauthier,
+last updated
+<!--[[[cog
+import subprocess
+import datetime
+
+files = [
+    'aider/website/docs/leaderboards/refactor.md',
+    'aider/website/_data/refactor_leaderboard.yml',
+]
+
+def get_last_modified_date(file):
+    result = subprocess.run(['git', 'log', '-1', '--format=%ct', file], capture_output=True, text=True)
+    if result.returncode == 0:
+        timestamp = int(result.stdout.strip())
+        return datetime.datetime.fromtimestamp(timestamp)
+    return datetime.datetime.min
+
+mod_dates = [get_last_modified_date(file) for file in files]
+latest_mod_date = max(mod_dates)
+cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
+]]]-->
+January 16, 2025.
+<!--[[[end]]]-->
+</p>
@@ -16,10 +16,9 @@ description: Aider can connect to most LLMs for AI pair programming.
 
 Aider works best with these models, which are skilled at editing code:
 
-- [GPT-4o](/docs/llms/openai.html)
-- [Claude 3.5 Sonnet](/docs/llms/anthropic.html)
-- [Claude 3 Opus](/docs/llms/anthropic.html)
-- [DeepSeek Coder V2](/docs/llms/deepseek.html)
+- [DeepSeek R1 and V3](/docs/llms/deepseek.html)
+- [Claude 3.7 Sonnet](/docs/llms/anthropic.html)
+- [OpenAI o1, o3-mini and GPT-4o](/docs/llms/openai.html)
 
 ## Free models
@@ -19,11 +19,11 @@ python -m pip install -U aider-chat
 export ANTHROPIC_API_KEY=<key> # Mac/Linux
 setx   ANTHROPIC_API_KEY <key> # Windows, restart shell after setx
 
-# Aider uses Claude 3.5 Sonnet by default (or use --sonnet)
+# Aider uses Claude 3.7 Sonnet by default
 aider
 
 # Claude 3 Opus
-aider --opus
+aider --model claude-3-opus-20240229
 
 # List models available from Anthropic
 aider --list-models anthropic/
@@ -39,3 +39,34 @@ with more generous rate limits.
 You can use `aider --model <model-name>` to use any other Anthropic model.
 For example, if you want to use a specific version of Opus
 you could do `aider --model claude-3-opus-20240229`.
+
+## Thinking tokens
+
+Aider can work with Sonnet 3.7's new thinking tokens, but does not ask Sonnet to use
+thinking tokens by default.
+
+Enabling thinking currently requires manual configuration.
+You need to add the following to your `.aider.model.settings.yml`
+[model settings file](/docs/config/adv-model-settings.html#model-settings).
+Adjust the `budget_tokens` value to change the target number of thinking tokens.
+
+```yaml
+- name: anthropic/claude-3-7-sonnet-20250219
+  edit_format: diff
+  weak_model_name: anthropic/claude-3-5-haiku-20241022
+  use_repo_map: true
+  examples_as_sys_msg: true
+  use_temperature: false
+  extra_params:
+    extra_headers:
+      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
+    max_tokens: 64000
+    thinking:
+      type: enabled
+      budget_tokens: 32000 # Adjust this number
+    cache_control: true
+  editor_model_name: anthropic/claude-3-7-sonnet-20250219
+  editor_edit_format: editor-diff
+```
+
+More streamlined support will be coming soon.
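Once that settings file is in place, launching aider with the matching model name picks the thinking configuration up; a minimal sketch:

```bash
# Must match the `name:` entry in .aider.model.settings.yml
aider --model anthropic/claude-3-7-sonnet-20250219
```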
@@ -11,6 +11,32 @@ You will need to have an AWS account with access to the Bedrock service.
 To configure Aider to use the Amazon Bedrock API, you need to set up your AWS credentials.
 This can be done using the AWS CLI or by setting environment variables.
 
+## Select a Model from Amazon Bedrock
+
+Before you can use a model through Amazon Bedrock, you must "enable" the model under the **Model
+Access** screen in the AWS Management Console.
+To find the `Model ID`, open the **Model Catalog** area in the Bedrock console, select the model
+you want to use, and find the `modelId` property under the "Usage" heading.
+
+### Bedrock Inference Profiles
+
+Amazon Bedrock has added support for a new feature called [cross-region "inference profiles."](https://aws.amazon.com/about-aws/whats-new/2024/09/amazon-bedrock-knowledge-bases-cross-region-inference/)
+Some models hosted in Bedrock _only_ support these inference profiles.
+If you're using one of these models, then you will need to use the `Inference Profile ID`
+instead of the `Model ID` from the **Model Catalog** screen, in the AWS Management Console.
+For example, the Claude Sonnet 3.7 model, released in February 2025, exclusively supports
+inference through inference profiles. To use this model, you would use the
+`us.anthropic.claude-3-7-sonnet-20250219-v1:0` Inference Profile ID.
+In the Amazon Bedrock console, go to Inference and Assessment ➡️ Cross-region Inference
+to find the `Inference Profile ID` value.
+
+If you attempt to use a `Model ID` for a model that exclusively supports the Inference Profile
+feature, you will receive an error message like the following:
+
+> litellm.BadRequestError: BedrockException - b'{"message":"Invocation of model ID
+anthropic.claude-3-7-sonnet-20250219-v1:0 with on-demand throughput isn\xe2\x80\x99t supported. Retry your
+request with the ID or ARN of an inference profile that contains this model."}'
+
 ## AWS CLI Configuration
 
 If you haven't already, install the [AWS CLI](https://aws.amazon.com/cli/) and configure it with your credentials:
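So for the Sonnet 3.7 example above, the launch command uses the inference profile ID with the usual `bedrock/` prefix; a sketch, assuming credentials and region are configured as described below:

```bash
aider --model bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
```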
@@ -39,6 +65,16 @@ export AWS_PROFILE=your-profile
 You can add these to your
 [.env file](/docs/config/dotenv.html).
 
+### Set Environment Variables with PowerShell
+
+If you're using PowerShell on MacOS, Linux, or Windows, you can set the same AWS configuration environment variables with these commands.
+
+```pwsh
+$env:AWS_ACCESS_KEY_ID = 'your_access_key'
+$env:AWS_SECRET_ACCESS_KEY = 'your_secret_key'
+$env:AWS_REGION = 'us-west-2' # Put whichever AWS region that you'd like, that the Bedrock service supports.
+```
+
 ## Install boto3
 
 The AWS Bedrock provider requires the `boto3` package in order to function correctly:
@@ -6,7 +6,8 @@ nav_order: 500
 # DeepSeek
 
 Aider can connect to the DeepSeek.com API.
-The DeepSeek Coder V2 model has a top score on aider's code editing benchmark.
+To work with DeepSeek's models, you need to set the `DEEPSEEK_API_KEY` environment variable with your [DeepSeek API key](https://platform.deepseek.com/api_keys).
+The DeepSeek Chat V3 model has a top score on aider's code editing benchmark.
 
 ```
 python -m pip install -U aider-chat
@@ -14,7 +15,7 @@ python -m pip install -U aider-chat
 export DEEPSEEK_API_KEY=<key> # Mac/Linux
 setx   DEEPSEEK_API_KEY <key> # Windows, restart shell after setx
 
-# Use DeepSeek Coder V2
-aider --deepseek
+# Use DeepSeek Chat v3
+aider --model deepseek/deepseek-chat
 ```
 
@@ -10,16 +10,18 @@ To use LM Studio:
 ```
 python -m pip install -U aider-chat
 
-export LM_STUDIO_API_KEY=<key> # Mac/Linux
-setx   LM_STUDIO_API_KEY <key> # Windows, restart shell after setx
+# Must set a value here even if its a dummy value
+export LM_STUDIO_API_KEY=dummy-api-key # Mac/Linux
+setx   LM_STUDIO_API_KEY dummy-api-key # Windows, restart shell after setx
 
-export LM_STUDIO_API_BASE=<url> # Mac/Linux
-setx   LM_STUDIO_API_BASE <url> # Windows, restart shell after setx
+# LM Studio default server URL is http://localhost:1234/v1
+export LM_STUDIO_API_BASE=http://localhost:1234/v1 # Mac/Linux
+setx   LM_STUDIO_API_BASE http://localhost:1234/v1 # Windows, restart shell after setx
 
 aider --model lm_studio/<your-model-name>
 ```
 
+**Note:** Even though LM Studio doesn't require an API Key out of the box the `LM_STUDIO_API_KEY` must have a dummy value like `dummy-api-key` set or the client request will fail trying to send an empty `Bearer` token.
+
 See the [model warnings](warnings.html)
 section for information on warnings which will occur
@@ -44,25 +44,22 @@ setx OLLAMA_API_KEY <api-key> # Windows, restart shell after setx

[Ollama uses a 2k context window by default](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size),
which is very small for working with aider.
It also **silently** discards context that exceeds the window.
This is especially dangerous because many users don't even realize that most of their data
is being discarded by Ollama.

By default, aider sets Ollama's context window
to be large enough for each request you send plus 8k tokens for the reply.
This ensures data isn't silently discarded by Ollama.

If you'd like, you can configure a fixed-size context window instead with an
[`.aider.model.settings.yml` file](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
like this:

```
- name: ollama/qwen2.5-coder:32b-instruct-fp16
  extra_params:
    num_ctx: 65536
```

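With a settings file like that in place, you would then launch aider against the same Ollama model; a sketch using the model name from the YAML above:

```
aider --model ollama/qwen2.5-coder:32b-instruct-fp16
```
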
@@ -8,7 +8,8 @@ nav_order: 500

Aider can connect to any LLM which is accessible via an OpenAI compatible API endpoint.

```
python -m pip install aider-install
aider-install

# Mac/Linux:
export OPENAI_API_BASE=<endpoint>
```

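To then run aider against a model served at that endpoint, pass an `openai/`-prefixed model name; a sketch based on aider's provider-prefix convention (the exact command isn't shown in this hunk):

```
aider --model openai/<model-name>
```
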
@@ -8,7 +8,7 @@ nav_order: 100

To work with OpenAI's models, you need to provide your
[OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key)
either in the `OPENAI_API_KEY` environment variable or
via the `--api-key openai=<key>` command line switch.

Aider has some built-in shortcuts for the most popular OpenAI models and
has been tested and benchmarked to work well with them:
@@ -16,28 +16,36 @@ has been tested and benchmarked to work well with them:

```
python -m pip install -U aider-chat

# o3-mini
aider --model o3-mini --api-key openai=<key>

# o1-mini
aider --model o1-mini --api-key openai=<key>

# GPT-4o
aider --model gpt-4o --api-key openai=<key>

# List models available from OpenAI
aider --list-models openai/

# You can also store your API key in environment variables (or .env)
export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows, restart shell after setx
```

You can use `aider --model <model-name>` to use any other OpenAI model.
For example, if you want to use a specific version of GPT-4 Turbo
you could do `aider --model gpt-4-0125-preview`.

## Reasoning models from other providers

Many of OpenAI's
"reasoning" models have restrictions on streaming and setting the temperature parameter.
Some also support different levels of "reasoning effort".
Aider is configured to work properly with these models
when served through major provider APIs and
has a `--reasoning-effort` setting.

You may need to [configure reasoning model settings](/docs/config/reasoning.html)
if you are using them through another provider
and see errors related to temperature or system prompt.

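For example, a sketch of raising the reasoning level with that setting (assuming the low/medium/high values OpenAI documents for these models):

```
aider --model o3-mini --reasoning-effort high
```
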
@@ -29,7 +29,7 @@ python -m pip install -U aider-chat

```
export OPENROUTER_API_KEY=<key> # Mac/Linux
setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx

aider --model openrouter/anthropic/claude-3.7-sonnet
```

@@ -39,5 +39,39 @@ If you get errors, check your

Be sure to "enable providers that may train on inputs"
to allow use of all models.

## Controlling provider selection

OpenRouter often has multiple providers serving each model.
You can control which OpenRouter providers are used for your requests in two ways:

1. By "ignoring" certain providers in your
[OpenRouter account settings](https://openrouter.ai/settings/preferences).
This disables those named providers across all the models that you access via OpenRouter.

2. By configuring "provider routing" in a `.aider.model.settings.yml` file.

Place that file in your home directory or the root of your git project, with
entries like this:

```yaml
- name: openrouter/anthropic/claude-3.7-sonnet
  extra_params:
    extra_body:
      provider:
        # Only use these providers, in this order
        order: ["Anthropic", "Together"]
        # Don't fall back to other providers
        allow_fallbacks: false
        # Skip providers that may train on inputs
        data_collection: "deny"
        # Only use providers supporting all parameters
        require_parameters: true
```

See [OpenRouter's provider routing docs](https://openrouter.ai/docs/provider-routing) for full details on these settings.

See [Advanced model settings](https://aider.chat/docs/config/adv-model-settings.html#model-settings)
for more details about model settings files.

@@ -57,16 +57,24 @@ cog.out(model_list)

]]]-->
- anthropic.claude-3-5-haiku-20241022-v1:0
- anthropic.claude-3-5-sonnet-20241022-v2:0
- anthropic.claude-3-7-sonnet-20250219-v1:0
- claude-3-5-haiku-20241022
- claude-3-5-haiku-latest
- claude-3-5-sonnet-20240620
- claude-3-5-sonnet-20241022
- claude-3-5-sonnet-latest
- claude-3-7-sonnet-20250219
- claude-3-7-sonnet-latest
- claude-3-haiku-20240307
- claude-3-opus-20240229
- claude-3-opus-latest
- claude-3-sonnet-20240229
- codestral/codestral-2405
- codestral/codestral-latest
- deepseek/deepseek-chat
- deepseek/deepseek-coder
- deepseek/deepseek-reasoner
- eu.anthropic.claude-3-5-haiku-20241022-v1:0
- eu.anthropic.claude-3-5-sonnet-20241022-v2:0
- mistral/codestral-2405
- mistral/codestral-latest

@@ -91,14 +99,18 @@ cog.out(model_list)

- mistral/pixtral-large-2411
- mistral/pixtral-large-latest
- openrouter/anthropic/claude-3.5-sonnet
- openrouter/anthropic/claude-3.7-sonnet
- openrouter/deepseek/deepseek-r1
- us.anthropic.claude-3-5-haiku-20241022-v1:0
- us.anthropic.claude-3-5-sonnet-20241022-v2:0
- us.anthropic.claude-3-7-sonnet-20250219-v1:0
- vertex_ai/claude-3-5-haiku
- vertex_ai/claude-3-5-haiku@20241022
- vertex_ai/claude-3-5-sonnet
- vertex_ai/claude-3-5-sonnet-v2
- vertex_ai/claude-3-5-sonnet-v2@20241022
- vertex_ai/claude-3-5-sonnet@20240620
- vertex_ai/claude-3-7-sonnet@20250219
- vertex_ai/claude-3-haiku
- vertex_ai/claude-3-haiku@20240307
- vertex_ai/claude-3-opus

@@ -24,6 +24,8 @@ In these cases, here are some things you might try.

Many LLMs now have very large context windows,
but filling them with irrelevant code or conversation
can confuse the model.
Above about 25k tokens of context, most models become distracted and less likely
to conform to their system prompt.

- Don't add too many files to the chat, *just* add the files you think need to be edited.
Aider also sends the LLM a [map of your entire git repo](https://aider.chat/docs/repomap.html), so other relevant code will be included automatically.

@@ -33,8 +35,8 @@ Aider also sends the LLM a [map of your entire git repo](https://aider.chat/docs

## Use a more capable model

If possible, try using GPT-4o, o3-mini, Claude 3.7 Sonnet, DeepSeek V3 or DeepSeek R1.
They are the strongest and most capable models.

Weaker models
are more prone to

@@ -62,6 +64,12 @@ Aider v0.50.2-dev

```
Models: claude-3-5-sonnet-20240620 with ♾️ diff edit format
```

## Try architect mode

Run aider with `--architect` or `/chat-mode architect` to enable [architect mode](../usage/modes.md#architect-mode-and-the-editor-model).
This mode first proposes changes, then uses a separate model to handle the file edits.
This two-step process often produces more reliable edits, especially with models that have trouble
following edit format instructions.

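For example, a minimal sketch (o3-mini here is just an illustration; any supported model works):

```
aider --architect --model o3-mini
```
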
## More help
aider/website/docs/troubleshooting/models-and-keys.md (new file, 32 lines)

@@ -0,0 +1,32 @@

---
parent: Troubleshooting
nav_order: 28
---

# Models and API keys

You need to tell aider which LLM to use and provide an API key.
The easiest way is to use the `--model` and `--api-key`
command line arguments, like this:

```
# Work with DeepSeek via DeepSeek's API
aider --model deepseek --api-key deepseek=your-key-goes-here

# Work with Claude 3.7 Sonnet via Anthropic's API
aider --model sonnet --api-key anthropic=your-key-goes-here

# Work with o3-mini via OpenAI's API
aider --model o3-mini --api-key openai=your-key-goes-here

# Work with Sonnet via OpenRouter's API
aider --model openrouter/anthropic/claude-3.7-sonnet --api-key openrouter=your-key-goes-here

# Work with DeepSeek Chat V3 via OpenRouter's API
aider --model openrouter/deepseek/deepseek-chat --api-key openrouter=your-key-goes-here
```

For more information, see the documentation sections:

- [Connecting to LLMs](https://aider.chat/docs/llms.html)
- [Configuring API keys](https://aider.chat/docs/config/api-keys.html)

@@ -29,7 +29,7 @@ Total tokens: 4864 of 16385

```
To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Try using a stronger model like DeepSeek V3 or Sonnet that can return diffs.

For more info: https://aider.chat/docs/token-limits.html
```

@@ -47,7 +47,7 @@ overflowing its context window.

Technically you can exhaust the context window if the input is
too large or if the input plus output are too large.

Strong models like GPT-4o and Sonnet have quite
large context windows, so this sort of error is
typically only an issue when working with weaker models.

@@ -73,7 +73,7 @@ To avoid hitting output token limits:

- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Use a strong model like gpt-4o, sonnet or DeepSeek V3 that can return diffs.
- Use a model that supports [infinite output](/docs/more/infinite-output.html).

## Other causes

Some files were not shown because too many files have changed in this diff.