mirror of https://github.com/Aider-AI/aider
synced 2026-04-26 01:25:17 +02:00

Compare commits: v0.57.1.de ... v0.63.2.de (1116 commits)
`.github/workflows/docker-build-test.yml` (vendored, 22 changed lines)
```diff
@@ -27,22 +27,24 @@ jobs:
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3

       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3

       - name: Login to DockerHub
         uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_PASSWORD }}
+        env:
+          dockerhub_username: ${{ secrets.DOCKERHUB_USERNAME }}
+          dockerhub_password: ${{ secrets.DOCKERHUB_PASSWORD }}
+        if: ${{ env.dockerhub_username }} && ${{ env.dockerhub_password }}

-      - name: Build Docker image
+      - name: Build Docker standard image
         uses: docker/build-push-action@v5
         with:
           context: .
           file: ./docker/Dockerfile
           platforms: linux/amd64,linux/arm64
           push: false
           target: aider
+
+      - name: Build Docker full image
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          file: ./docker/Dockerfile
+          platforms: linux/amd64,linux/arm64
+          push: false
+          target: aider-full
```
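The DockerHub login change above gates the step on whether credentials are actually configured, by mirroring the secrets into env vars and testing those in the step's `if:`. A minimal plain-shell sketch of that gating logic (the `dockerhub_login_needed` helper is hypothetical, not part of the workflow):

```shell
# Hypothetical stand-in for the workflow's conditional login step:
# returns success only when both credential variables are non-empty,
# mirroring `if: ${{ env.dockerhub_username }} && ${{ env.dockerhub_password }}`.
dockerhub_login_needed() {
  [ -n "${DOCKERHUB_USERNAME:-}" ] && [ -n "${DOCKERHUB_PASSWORD:-}" ]
}

if dockerhub_login_needed; then
  echo "login to DockerHub"
else
  echo "skip login (no credentials configured)"
fi
```

This matters on forks, where the secrets are typically unset and an unconditional `docker/login-action` step would fail the build.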
`.github/workflows/pages.yml` (vendored, 6 changed lines)
```diff
@@ -70,15 +70,15 @@ jobs:
         id: deployment
         uses: actions/deploy-pages@v2

-      - name: Set up Python ${{ matrix.python-version }}
+      - name: Set up Python 3.12
         uses: actions/setup-python@v5
         with:
-          python-version: ${{ matrix.python-version }}
+          python-version: '3.12'

       - name: Install linkchecker
         run: |
           python -m pip install --upgrade pip
-          pip install linkchecker
+          python -m pip install linkchecker

       - name: Run linkchecker
         run: |
```
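The pages workflow swaps a bare `pip install` for `python -m pip install`. The reason is interpreter binding: `python -m pip` always uses the pip bundled with the interpreter running the workflow, while a bare `pip` on PATH may belong to a different Python installation. A small illustrative check (sketch only):

```python
import importlib.util
import sys

# "python -m pip" runs pip inside this exact interpreter; a bare "pip"
# executable on PATH may resolve to some other Python install entirely.
print("interpreter:", sys.executable)
print("pip importable here:", importlib.util.find_spec("pip") is not None)
```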
`.gitignore` (vendored, 1 changed line)
```diff
@@ -11,3 +11,4 @@ _site
 .jekyll-cache/
 .jekyll-metadata
 aider/__version__.py
+.venv/
```
`CONTRIBUTING.md`

````diff
@@ -17,10 +17,10 @@ Contributions of
 [LLM benchmark results](https://aider.chat/docs/leaderboards/)
 are welcome!
 See the
-[benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
+[benchmark README](https://github.com/Aider-AI/aider/blob/main/benchmark/README.md)
 for information on running aider's code editing benchmarks.
 Submit results by opening a PR with edits to the
-[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/aider/website/_data/).
+[benchmark results data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).


 ## Pull Requests
@@ -33,19 +33,16 @@ ensure that your contributions can be integrated smoothly.

 ## Licensing

-By contributing to this project, you agree that your contributions
-will be licensed under the Apache License 2.0. Additionally, you
-understand and agree that contributions may be subject to a different
-license, should the project maintainers decide to change the licensing
-terms.
+Before contributing a PR, please review our
+[Individual Contributor License Agreement](https://aider.chat/docs/legal/contributor-agreement.html).
+All contributors will be asked to complete the agreement as part of the PR process.

 ## Setting up a Development Environment

 ### Clone the Repository

 ```
-git clone https://github.com/paul-gauthier/aider.git
+git clone https://github.com/Aider-AI/aider.git
 cd aider
 ```

@@ -154,6 +151,10 @@ The project's documentation is built using Jekyll and hosted on GitHub Pages. To
 ```
 bundle exec jekyll build
 ```
+5. Preview the website while editing (optional):
+```
+bundle exec jekyll serve
+```

 The built documentation will be available in the `aider/website/_site` directory.
````
`HISTORY.md` (119 changed lines)
@@ -1,6 +1,123 @@
|
||||
|
||||
# Release history
|
||||
|
||||
### Aider v0.63.0
|
||||
|
||||
- Support for Qwen 2.5 Coder 32B.
|
||||
- `/web` command just adds the page to the chat, without triggering an LLM response.
|
||||
- Improved prompting for the user's preferred chat language.
|
||||
- Improved handling of LiteLLM exceptions.
|
||||
- Bugfix for double-counting tokens when reporting cache stats.
|
||||
- Bugfix for the LLM creating new files.
|
||||
- Other small bug fixes.
|
||||
- Aider wrote 55% of the code in this release.
|
||||
|
||||
### Aider v0.62.0
|
||||
|
||||
- Full support for Claude 3.5 Haiku
|
||||
- Scored 75% on [aider's code editing leaderboard](https://aider.chat/docs/leaderboards/).
|
||||
- Almost as good as Sonnet at much lower cost.
|
||||
- Launch with `--haiku` to use it.
|
||||
- Easily apply file edits from ChatGPT, Claude or other web apps
|
||||
- Chat with ChatGPT or Claude via their web app.
|
||||
- Give it your source files and ask for the changes you want.
|
||||
- Use the web app's "copy response" button to copy the entire reply from the LLM.
|
||||
- Run `aider --apply-clipboard-edits file-to-edit.js`.
|
||||
- Aider will edit your file with the LLM's changes.
|
||||
- Bugfix for creating new files.
|
||||
- Aider wrote 84% of the code in this release.
|
||||
|
||||
### Aider v0.61.0
|
||||
|
||||
- Load and save aider slash-commands to files:
|
||||
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
|
||||
- `/load <fname>` will replay the commands in the file.
|
||||
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
|
||||
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
|
||||
- Anonymous, opt-in [analytics](https://aider.chat/docs/more/analytics.html) with no personal data sharing.
|
||||
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
|
||||
- Bugfix for when diff mode flexibly handles the model using the wrong filename.
|
||||
- Displays filenames in sorted order for `/add` and `/read-only`.
|
||||
- New `--no-fancy-input` switch disables prompt toolkit input, now still available with `--no-pretty`.
|
||||
- Override browser config with `--no-browser` or `--no-gui`.
|
||||
- Offer to open documentation URLs when errors occur.
|
||||
- Properly support all o1 models, regardless of provider.
|
||||
- Improved layout of filenames above input prompt.
|
||||
- Better handle corrupted repomap tags cache.
|
||||
- Improved handling of API errors, especially when accessing the weak model.
|
||||
- Aider wrote 68% of the code in this release.
|
||||
|
||||
### Aider v0.60.1
|
||||
|
||||
- Enable image support for Sonnet 10/22.
|
||||
- Display filenames in sorted order.
|
||||
|
||||
### Aider v0.60.0
|
||||
|
||||
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
|
||||
- Aider uses Sonnet 10/22 by default.
|
||||
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
|
||||
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
|
||||
- Corrected diff edit format prompt that only the first match is replaced.
|
||||
- Stronger whole edit format prompt asking for clean file names.
|
||||
- Now offers to add `.env` to the `.gitignore` file.
|
||||
- Ships with a small model metadata json file to handle models not yet updated in litellm.
|
||||
- Model settings for o1 models on azure.
|
||||
- Bugfix to properly include URLs in `/help` RAG results.
|
||||
- Aider wrote 49% of the code in this release.
|
||||
|
||||
### Aider v0.59.1
|
||||
|
||||
- Check for obsolete `yes: true` in yaml config, show helpful error.
|
||||
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
|
||||
|
||||
### Aider v0.59.0
|
||||
|
||||
- Improvements to `/read-only`:
|
||||
- Now supports shell-style auto-complete of the full file system.
|
||||
- Still auto-completes the full paths of the repo files like `/add`.
|
||||
- Now supports globs like `src/**/*.py`
|
||||
- Renamed `--yes` to `--yes-always`.
|
||||
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
|
||||
- Existing YAML and .env files will need to be updated.
|
||||
- Can still abbreviate to `--yes` on the command line.
|
||||
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
|
||||
- `/settings` now includes the same announcement lines that would print at launch.
|
||||
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
|
||||
- Added `--skip-sanity-check-repo` switch to speedup launch in large repos.
|
||||
- Bugfix so architect mode handles Control-C properly.
|
||||
- Repo-map is deterministic now, with improved caching logic.
|
||||
- Improved commit message prompt.
|
||||
- Aider wrote 77% of the code in this release.
|
||||
|
||||
### Aider v0.58.1
|
||||
|
||||
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
|
||||
|
||||
### Aider v0.58.0

- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
  - Use a strong reasoning model like o1-preview as your Architect.
  - Use a cheaper, faster model like gpt-4o as your Editor.
  - New `--o1-preview` and `--o1-mini` shortcuts.
- Support for new Gemini 002 models.
- Better support for Qwen 2.5 models.
- Many confirmation questions can be skipped for the rest of the session with the "(D)on't ask again" response.
- Autocomplete for `/read-only` supports the entire filesystem.
- New settings for completion menu colors.
- New `/copy` command to copy the last LLM response to the clipboard.
- Renamed `/clipboard` to `/paste`.
- Will now follow HTTP redirects when scraping URLs.
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
- ModelSettings takes an `extra_params` dict to specify any extras to pass to `litellm.completion()`.
- Support for cursor shapes when in vim mode.
- Numerous bug fixes.
- Aider wrote 53% of the code in this release.

### Aider v0.57.1

- Fixed dependency conflict between aider-chat[help] and aider-chat[playwright].

### Aider v0.57.0

- Support for OpenAI o1 models:
@@ -661,7 +778,7 @@

### Aider v0.14.0

- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9

14 README.md

@@ -46,7 +46,7 @@ cog.out(open("aider/website/_includes/get-started.md").read())

You can get started quickly like this:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

# Change directory into a git repo
cd /to/your/git/repo
```

@@ -107,7 +107,7 @@ projects like django, scikitlearn, matplotlib, etc.

- [Configuration](https://aider.chat/docs/config.html)
- [Troubleshooting](https://aider.chat/docs/troubleshooting.html)
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [GitHub](https://github.com/paul-gauthier/aider)
- [GitHub](https://github.com/Aider-AI/aider)
- [Discord](https://discord.gg/Tv2uQnR88V)
- [Blog](https://aider.chat/blog/)

@@ -118,14 +118,14 @@ projects like django, scikitlearn, matplotlib, etc.

- *The best AI coding assistant so far.* -- [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
- *Aider ... has easily quadrupled my coding productivity.* -- [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
- *It's a cool workflow... Aider's ergonomics are perfect for me.* -- [qup](https://news.ycombinator.com/item?id=38185326)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/paul-gauthier/aider/issues/124)
- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/paul-gauthier/aider/issues/6#issue-1722897858)
- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/paul-gauthier/aider/issues/82#issuecomment-1631876700)
- *It's really like having your senior developer live right in your Git repo - truly amazing!* -- [rappster](https://github.com/Aider-AI/aider/issues/124)
- *What an amazing tool. It's incredible.* -- [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *Aider is such an astounding thing!* -- [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *It was WAY faster than I would be getting off the ground and making the first few working versions.* -- [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
- *THANK YOU for Aider! It really feels like a glimpse into the future of coding.* -- [derwiki](https://news.ycombinator.com/item?id=38205643)
- *It's just amazing. It is freeing me to do things I felt were out my comfort zone before.* -- [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *This project is stellar.* -- [funkytaco](https://github.com/paul-gauthier/aider/issues/112#issuecomment-1637429008)
- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/paul-gauthier/aider/issues/84)
- *This project is stellar.* -- [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *Amazing project, definitely the best AI coding assistant I've used.* -- [joshuavial](https://github.com/Aider-AI/aider/issues/84)
- *I absolutely love using Aider ... It makes software development feel so much lighter as an experience.* -- [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.* -- [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *I am an aider addict. I'm getting so much more work done, but in less time.* -- [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)

@@ -1,6 +1,6 @@

try:
    from aider.__version__ import __version__
except Exception:
    __version__ = "0.57.1.dev"
    __version__ = "0.63.2.dev"

__all__ = [__version__]

164 aider/analytics.py (new file)

@@ -0,0 +1,164 @@

import json
import platform
import sys
import time
import uuid
from pathlib import Path

from mixpanel import Mixpanel
from posthog import Posthog

from aider import __version__
from aider.dump import dump  # noqa: F401
from aider.models import model_info_manager

mixpanel_project_token = "6da9a43058a5d1b9f3353153921fb04d"
posthog_project_api_key = "phc_99T7muzafUMMZX15H8XePbMSreEUzahHbtWjy3l5Qbv"
posthog_host = "https://us.i.posthog.com"


class Analytics:
    # providers
    mp = None
    ph = None

    # saved
    user_id = None
    permanently_disable = None
    asked_opt_in = None

    # ephemeral
    logfile = None

    def __init__(self, logfile=None, permanently_disable=False):
        self.logfile = logfile
        self.get_or_create_uuid()

        if self.permanently_disable or permanently_disable or not self.asked_opt_in:
            self.disable(permanently_disable)

    def enable(self):
        if not self.user_id:
            self.disable(False)
            return

        if self.permanently_disable:
            self.disable(True)
            return

        if not self.asked_opt_in:
            self.disable(False)
            return

        self.mp = Mixpanel(mixpanel_project_token)
        self.ph = Posthog(project_api_key=posthog_project_api_key, host=posthog_host)

    def disable(self, permanently):
        self.mp = None
        self.ph = None

        if permanently:
            self.asked_opt_in = True
            self.permanently_disable = True
            self.save_data()

    def need_to_ask(self):
        return not self.asked_opt_in and not self.permanently_disable

    def get_data_file_path(self):
        data_file = Path.home() / ".aider" / "analytics.json"
        data_file.parent.mkdir(parents=True, exist_ok=True)
        return data_file

    def get_or_create_uuid(self):
        self.load_data()
        if self.user_id:
            return

        self.user_id = str(uuid.uuid4())
        self.save_data()

    def load_data(self):
        data_file = self.get_data_file_path()
        if data_file.exists():
            try:
                data = json.loads(data_file.read_text())
                self.permanently_disable = data.get("permanently_disable")
                self.user_id = data.get("uuid")
                self.asked_opt_in = data.get("asked_opt_in", False)
            except (json.decoder.JSONDecodeError, OSError):
                self.disable(permanently=False)

    def save_data(self):
        data_file = self.get_data_file_path()
        data = dict(
            uuid=self.user_id,
            permanently_disable=self.permanently_disable,
            asked_opt_in=self.asked_opt_in,
        )

        # Allow exceptions; crash if we can't record permanently_disabled=True, etc
        data_file.write_text(json.dumps(data, indent=4))

    def get_system_info(self):
        return {
            "python_version": sys.version.split()[0],
            "os_platform": platform.system(),
            "os_release": platform.release(),
            "machine": platform.machine(),
        }

    def _redact_model_name(self, model):
        if not model:
            return None

        info = model_info_manager.get_model_from_cached_json_db(model.name)
        if info:
            return model.name
        elif "/" in model.name:
            return model.name.split("/")[0] + "/REDACTED"
        return None

    def event(self, event_name, main_model=None, **kwargs):
        if not (self.mp or self.ph) and not self.logfile:
            return

        properties = {}

        if main_model:
            properties["main_model"] = self._redact_model_name(main_model)
            properties["weak_model"] = self._redact_model_name(main_model.weak_model)
            properties["editor_model"] = self._redact_model_name(main_model.editor_model)

        properties.update(kwargs)
        properties.update(self.get_system_info())  # Add system info to all events

        # Handle numeric values
        for key, value in properties.items():
            if isinstance(value, (int, float)):
                properties[key] = value
            else:
                properties[key] = str(value)

        properties["aider_version"] = __version__

        if self.mp:
            self.mp.track(self.user_id, event_name, dict(properties))

        if self.ph:
            self.ph.capture(self.user_id, event_name, dict(properties))

        if self.logfile:
            log_entry = {
                "event": event_name,
                "properties": properties,
                "user_id": self.user_id,
                "time": int(time.time()),
            }
            with open(self.logfile, "a") as f:
                json.dump(log_entry, f)
                f.write("\n")

    def __del__(self):
        if self.ph:
            self.ph.shutdown()

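When an `--analytics-log FILE` is given, `Analytics.event()` above appends one JSON object per line to that file. A stdlib-only sketch of just that logfile branch (the file path and event payloads here are hypothetical, and the helper name `log_event` is not part of aider):

```python
import json
import os
import tempfile
import time


def log_event(logfile, event_name, user_id, **properties):
    # Mirrors the logfile branch of Analytics.event(): numbers stay numeric,
    # everything else is stringified, then one JSON object per line is appended.
    props = {k: v if isinstance(v, (int, float)) else str(v) for k, v in properties.items()}
    entry = {"event": event_name, "properties": props, "user_id": user_id, "time": int(time.time())}
    with open(logfile, "a") as f:
        json.dump(entry, f)
        f.write("\n")


logfile = os.path.join(tempfile.mkdtemp(), "analytics.jsonl")
log_event(logfile, "launch", "uuid-1234", map_tokens=1024, edit_format="diff")
log_event(logfile, "exit", "uuid-1234", reason="completed")

# Read the events back, one JSON document per line.
with open(logfile) as f:
    events = [json.loads(line) for line in f]
print([e["event"] for e in events])  # ['launch', 'exit']
```

Appending one self-contained JSON document per line keeps the log greppable and lets each event be parsed independently even if the process is killed mid-session.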
207 aider/args.py

@@ -25,6 +25,7 @@ def get_parser(default_config_files, git_root):

        description="aider is AI pair programming in your terminal",
        add_config_file_help=True,
        default_config_files=default_config_files,
        config_file_parser_class=configargparse.YAMLConfigFileParser,
        auto_env_var_prefix="AIDER_",
    )
    group = parser.add_argument_group("Main")

@@ -57,7 +58,7 @@

        const=opus_model,
        help=f"Use {opus_model} model for the main chat",
    )
    sonnet_model = "claude-3-5-sonnet-20240620"
    sonnet_model = "claude-3-5-sonnet-20241022"
    group.add_argument(
        "--sonnet",
        action="store_const",

@@ -65,6 +66,14 @@

        const=sonnet_model,
        help=f"Use {sonnet_model} model for the main chat",
    )
    haiku_model = "claude-3-5-haiku-20241022"
    group.add_argument(
        "--haiku",
        action="store_const",
        dest="model",
        const=haiku_model,
        help=f"Use {haiku_model} model for the main chat",
    )
    gpt_4_model = "gpt-4-0613"
    group.add_argument(
        "--4",

@@ -117,6 +126,22 @@

        const=deepseek_model,
        help=f"Use {deepseek_model} model for the main chat",
    )
    o1_mini_model = "o1-mini"
    group.add_argument(
        "--o1-mini",
        action="store_const",
        dest="model",
        const=o1_mini_model,
        help=f"Use {o1_mini_model} model for the main chat",
    )
    o1_preview_model = "o1-preview"
    group.add_argument(
        "--o1-preview",
        action="store_const",
        dest="model",
        const=o1_preview_model,
        help=f"Use {o1_preview_model} model for the main chat",
    )

    ##########
    group = parser.add_argument_group("Model Settings")

@@ -181,6 +206,13 @@

        default=None,
        help="Specify what edit format the LLM should use (default depends on model)",
    )
    group.add_argument(
        "--architect",
        action="store_const",
        dest="edit_format",
        const="architect",
        help="Use architect edit format for the main chat",
    )
    group.add_argument(
        "--weak-model",
        metavar="WEAK_MODEL",

@@ -190,6 +222,18 @@

            " depends on --model)"
        ),
    )
    group.add_argument(
        "--editor-model",
        metavar="EDITOR_MODEL",
        default=None,
        help="Specify the model to use for editor tasks (default depends on --model)",
    )
    group.add_argument(
        "--editor-edit-format",
        metavar="EDITOR_EDIT_FORMAT",
        default=None,
        help="Specify the edit format for the editor model (default: depends on editor model)",
    )
    group.add_argument(
        "--show-model-warnings",
        action=argparse.BooleanOptionalAction,
@@ -197,17 +241,25 @@

        help="Only work with models that have meta-data available (default: True)",
    )
    group.add_argument(
        "--map-tokens",
        "--max-chat-history-tokens",
        type=int,
        default=None,
        help="Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)",
        help=(
            "Soft limit on tokens for chat history, after which summarization begins."
            " If unspecified, defaults to the model's max_chat_history_tokens."
        ),
    )
    # This is a duplicate of the argument in the preparser and is a no-op by this time of
    # argument parsing, but it's here so that the help is displayed as expected.
    group.add_argument(
        "--map-refresh",
        choices=["auto", "always", "files", "manual"],
        default="auto",
        help="Control how often the repo map is refreshed (default: auto)",
        "--env-file",
        metavar="ENV_FILE",
        default=default_env_file(git_root),
        help="Specify the .env file to load (default: .env in git root)",
    )

    ##########
    group = parser.add_argument_group("Cache Settings")
    group.add_argument(
        "--cache-prompts",
        action=argparse.BooleanOptionalAction,

@@ -220,29 +272,30 @@

        default=0,
        help="Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)",
    )

    ##########
    group = parser.add_argument_group("Repomap Settings")
    group.add_argument(
        "--map-tokens",
        type=int,
        default=None,
        help="Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)",
    )
    group.add_argument(
        "--map-refresh",
        choices=["auto", "always", "files", "manual"],
        default="auto",
        help=(
            "Control how often the repo map is refreshed. Options: auto, always, files, manual"
            " (default: auto)"
        ),
    )
    group.add_argument(
        "--map-multiplier-no-files",
        type=float,
        default=2,
        help="Multiplier for map tokens when no files are specified (default: 2)",
    )
    group.add_argument(
        "--max-chat-history-tokens",
        type=int,
        default=None,
        help=(
            "Maximum number of tokens to use for chat history. If not specified, uses the model's"
            " max_chat_history_tokens."
        ),
    )
    # This is a duplicate of the argument in the preparser and is a no-op by this time of
    # argument parsing, but it's here so that the help is displayed as expected.
    group.add_argument(
        "--env-file",
        metavar="ENV_FILE",
        default=default_env_file(git_root),
        help="Specify the .env file to load (default: .env in git root)",
    )

    ##########
    group = parser.add_argument_group("History Files")
@@ -328,6 +381,39 @@

        default="#0088ff",
        help="Set the color for assistant output (default: #0088ff)",
    )
    group.add_argument(
        "--completion-menu-color",
        metavar="COLOR",
        default=None,
        help="Set the color for the completion menu (default: terminal's default text color)",
    )
    group.add_argument(
        "--completion-menu-bg-color",
        metavar="COLOR",
        default=None,
        help=(
            "Set the background color for the completion menu (default: terminal's default"
            " background color)"
        ),
    )
    group.add_argument(
        "--completion-menu-current-color",
        metavar="COLOR",
        default=None,
        help=(
            "Set the color for the current item in the completion menu (default: terminal's default"
            " background color)"
        ),
    )
    group.add_argument(
        "--completion-menu-current-bg-color",
        metavar="COLOR",
        default=None,
        help=(
            "Set the background color for the current item in the completion menu (default:"
            " terminal's default text color)"
        ),
    )
    group.add_argument(
        "--code-theme",
        default="default",

@@ -425,6 +511,12 @@

        default=False,
        help="Perform a dry run without modifying files (default: False)",
    )
    group.add_argument(
        "--skip-sanity-check-repo",
        action="store_true",
        help="Skip the sanity check for the git repository (default: False)",
        default=False,
    )
    group = parser.add_argument_group("Fixing and committing")
    group.add_argument(
        "--lint",

@@ -466,6 +558,25 @@

    )

    ##########
    group = parser.add_argument_group("Analytics")
    group.add_argument(
        "--analytics",
        action=argparse.BooleanOptionalAction,
        default=False,
        help="Enable/disable analytics for one session (default: False)",
    )
    group.add_argument(
        "--analytics-log",
        metavar="ANALYTICS_LOG_FILE",
        help="Specify a file to log analytics events",
    )
    group.add_argument(
        "--analytics-disable",
        action="store_true",
        help="Permanently disable analytics",
        default=False,
    )

    group = parser.add_argument_group("Other Settings")
    group.add_argument(
        "--file",
@@ -485,12 +596,6 @@

        help="Use VI editing mode in the terminal (default: False)",
        default=False,
    )
    group.add_argument(
        "--voice-language",
        metavar="VOICE_LANGUAGE",
        default="en",
        help="Specify the language for voice using ISO 639-1 code (default: auto)",
    )
    group.add_argument(
        "--chat-language",
        metavar="CHAT_LANGUAGE",

@@ -534,7 +639,13 @@

        help="Apply the changes from the given file instead of running the chat (debug)",
    )
    group.add_argument(
        "--yes",
        "--apply-clipboard-edits",
        action="store_true",
        help="Apply clipboard contents as edits using the main model's editor format",
        default=False,
    )
    group.add_argument(
        "--yes-always",
        action="store_true",
        help="Always say yes to every confirmation",
        default=None,

@@ -582,6 +693,11 @@

            " (disables chat mode)"
        ),
    )
    group.add_argument(
        "--load",
        metavar="LOAD_FILE",
        help="Load and execute /commands from a file on launch",
    )
    group.add_argument(
        "--encoding",
        default="utf-8",

@@ -600,8 +716,8 @@

    group.add_argument(
        "--gui",
        "--browser",
        action="store_true",
        help="Run aider in your browser",
        action=argparse.BooleanOptionalAction,
        help="Run aider in your browser (default: False)",
        default=False,
    )
    group.add_argument(

@@ -610,6 +726,28 @@

        default=True,
        help="Enable/disable suggesting shell commands (default: True)",
    )
    group.add_argument(
        "--fancy-input",
        action=argparse.BooleanOptionalAction,
        default=True,
        help="Enable/disable fancy input with history and completion (default: True)",
    )

    ##########
    group = parser.add_argument_group("Voice Settings")
    group.add_argument(
        "--voice-format",
        metavar="VOICE_FORMAT",
        default="wav",
        choices=["wav", "mp3", "webm"],
        help="Audio format for voice recording (default: wav). webm and mp3 require ffmpeg",
    )
    group.add_argument(
        "--voice-language",
        metavar="VOICE_LANGUAGE",
        default="en",
        help="Specify the language for voice using ISO 639-1 code (default: auto)",
    )

    return parser
@@ -625,7 +763,6 @@ def get_md_help():

    parser.formatter_class = MarkdownHelpFormatter

    return argparse.ArgumentParser.format_help(parser)
    return parser.format_help()


def get_sample_yaml():

@@ -639,7 +776,6 @@ def get_sample_yaml():

    parser.formatter_class = YamlHelpFormatter

    return argparse.ArgumentParser.format_help(parser)
    return parser.format_help()


def get_sample_dotenv():

@@ -653,7 +789,6 @@ def get_sample_dotenv():

    parser.formatter_class = DotEnvFormatter

    return argparse.ArgumentParser.format_help(parser)
    return parser.format_help()


def main():

@@ -147,7 +147,10 @@ class YamlHelpFormatter(argparse.HelpFormatter):

        elif action.nargs in ("*", "+") or isinstance(action, argparse._AppendAction):
            parts.append(f"#{switch}: xxx")
            parts.append("## Specify multiple values like this:")
            parts.append(f"#{switch}: [xxx,yyyy,zzz]\n")
            parts.append(f"#{switch}:")
            parts.append(f"# - xxx")
            parts.append(f"# - yyy")
            parts.append(f"# - zzz")
        else:
            parts.append(f"#{switch}: xxx\n")

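Several of the switches added in this diff (`--gui`, `--fancy-input`, `--analytics`) use `argparse.BooleanOptionalAction`, which generates a matching `--no-*` negation from a single declaration. A minimal sketch of the behavior (the parser and option name here are illustrative, not aider's full parser):

```python
import argparse

parser = argparse.ArgumentParser()
# BooleanOptionalAction (Python 3.9+) creates both --gui and --no-gui.
parser.add_argument("--gui", action=argparse.BooleanOptionalAction, default=False)

print(parser.parse_args(["--gui"]).gui)     # True
print(parser.parse_args(["--no-gui"]).gui)  # False
print(parser.parse_args([]).gui)            # False (the default)
```

This is why the diff changes `--gui` from `action="store_true"` to `BooleanOptionalAction`: users can override a config-file or env-var setting on the command line with the `--no-` form.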
@@ -1,13 +1,16 @@

from .architect_coder import ArchitectCoder
from .ask_coder import AskCoder
from .base_coder import Coder
from .editblock_coder import EditBlockCoder
from .editblock_fenced_coder import EditBlockFencedCoder
from .editor_editblock_coder import EditorEditBlockCoder
from .editor_whole_coder import EditorWholeFileCoder
from .help_coder import HelpCoder

# from .single_wholefile_func_coder import SingleWholeFileFunctionCoder
from .udiff_coder import UnifiedDiffCoder
from .wholefile_coder import WholeFileCoder

# from .single_wholefile_func_coder import SingleWholeFileFunctionCoder

__all__ = [
    HelpCoder,
    AskCoder,

@@ -17,4 +20,7 @@ __all__ = [

    WholeFileCoder,
    UnifiedDiffCoder,
    # SingleWholeFileFunctionCoder,
    ArchitectCoder,
    EditorEditBlockCoder,
    EditorWholeFileCoder,
]

47 aider/coders/architect_coder.py (new file)

@@ -0,0 +1,47 @@

from .architect_prompts import ArchitectPrompts
from .ask_coder import AskCoder
from .base_coder import Coder


class ArchitectCoder(AskCoder):
    edit_format = "architect"
    gpt_prompts = ArchitectPrompts()

    def reply_completed(self):
        content = self.partial_response_content

        if not content or not content.strip():
            return

        if not self.io.confirm_ask("Edit the files?"):
            return

        kwargs = dict()

        # Use the editor_model from the main_model if it exists, otherwise use the main_model itself
        editor_model = self.main_model.editor_model or self.main_model

        kwargs["main_model"] = editor_model
        kwargs["edit_format"] = self.main_model.editor_edit_format
        kwargs["suggest_shell_commands"] = False
        kwargs["map_tokens"] = 0
        kwargs["total_cost"] = self.total_cost
        kwargs["cache_prompts"] = False
        kwargs["num_cache_warming_pings"] = 0
        kwargs["summarize_from_coder"] = False

        new_kwargs = dict(io=self.io, from_coder=self)
        new_kwargs.update(kwargs)

        editor_coder = Coder.create(**new_kwargs)
        editor_coder.cur_messages = []
        editor_coder.done_messages = []

        if self.verbose:
            editor_coder.show_announcements()

        editor_coder.run(with_message=content, preproc=False)

        self.move_back_cur_messages("I made those changes to the files.")
        self.total_cost = editor_coder.total_cost
        self.aider_commit_hashes = editor_coder.aider_commit_hashes

40 aider/coders/architect_prompts.py (new file)

@@ -0,0 +1,40 @@

# flake8: noqa: E501

from .base_prompts import CoderPrompts


class ArchitectPrompts(CoderPrompts):
    main_system = """Act as an expert architect engineer and provide direction to your editor engineer.
Study the change request and the current code.
Describe how to modify the code to complete the request.
The editor engineer will rely solely on your instructions, so make them unambiguous and complete.
Explain all needed code changes clearly and completely, but concisely.
Just show the changes needed.

DO NOT show the entire updated function/file/etc!

Always reply to the user in {language}.
"""

    example_messages = []

    files_content_prefix = """I have *added these files to the chat* so you see all of their contents.
*Trust this message as the true contents of the files!*
Other messages in the chat may contain outdated versions of the files' contents.
"""  # noqa: E501

    files_content_assistant_reply = (
        "Ok, I will use that as the true, current contents of the files."
    )

    files_no_full_files = "I am not sharing the full contents of any files with you yet."

    files_no_full_files_with_repo_map = ""
    files_no_full_files_with_repo_map_reply = ""

    repo_content_prefix = """I am working with you on code in a git repository.
Here are summaries of some files present in my git repo.
If you need to see the full contents of any files to answer my questions, ask me to *add them to the chat*.
"""

    system_reminder = ""

@@ -6,8 +6,7 @@ from .base_prompts import CoderPrompts

class AskPrompts(CoderPrompts):
    main_system = """Act as an expert code analyst.
Answer questions about the supplied code.

Always reply to the user in the same language they are using.
Always reply to the user in {language}.
"""

    example_messages = []

@@ -17,6 +16,10 @@ Always reply to the user in the same language they are using.

Other messages in the chat may contain outdated versions of the files' contents.
"""  # noqa: E501

    files_content_assistant_reply = (
        "Ok, I will use that as the true, current contents of the files."
    )

    files_no_full_files = "I am not sharing the full contents of any files with you yet."

    files_no_full_files_with_repo_map = ""

@@ -17,9 +17,12 @@ from collections import defaultdict
|
||||
from datetime import datetime
|
||||
from json.decoder import JSONDecodeError
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
|
||||
from aider import __version__, models, prompts, urls, utils
|
||||
from aider.analytics import Analytics
|
||||
from aider.commands import Commands
|
||||
from aider.exceptions import LiteLLMExceptions
|
||||
from aider.history import ChatSummary
|
||||
from aider.io import ConfirmGroup, InputOutput
|
||||
from aider.linter import Linter
|
||||
@@ -27,7 +30,7 @@ from aider.llm import litellm
|
||||
from aider.repo import ANY_GIT_ERROR, GitRepo
|
||||
from aider.repomap import RepoMap
|
||||
from aider.run_cmd import run_cmd
|
||||
from aider.sendchat import retry_exceptions, send_completion
|
||||
from aider.sendchat import RETRY_TIMEOUT, send_completion
|
||||
from aider.utils import format_content, format_messages, format_tokens, is_image_file
|
||||
|
||||
from ..dump import dump # noqa: F401
|
||||
@@ -180,6 +183,13 @@ class Coder:
|
||||
output += ", infinite output"
|
||||
lines.append(output)
|
||||
|
||||
if self.edit_format == "architect":
|
||||
output = (
|
||||
f"Editor model: {main_model.editor_model.name} with"
|
||||
f" {main_model.editor_edit_format} edit format"
|
||||
)
|
||||
lines.append(output)
|
||||
|
||||
if weak_model is not main_model:
|
output = f"Weak model: {weak_model.name}"
lines.append(output)
@@ -251,12 +261,17 @@ class Coder:
commands=None,
summarizer=None,
total_cost=0.0,
analytics=None,
map_refresh="auto",
cache_prompts=False,
num_cache_warming_pings=0,
suggest_shell_commands=True,
chat_language=None,
):
# Fill in a dummy Analytics if needed, but it is never .enable()'d
self.analytics = analytics if analytics is not None else Analytics()

self.event = self.analytics.event
self.chat_language = chat_language
self.commit_before_message = []
self.aider_commit_hashes = set()
@@ -340,6 +355,9 @@ class Coder:

for fname in fnames:
fname = Path(fname)
if self.repo and self.repo.git_ignored_file(fname):
self.io.tool_warning(f"Skipping {fname} that matches gitignore spec.")

if self.repo and self.repo.ignored_file(fname):
self.io.tool_warning(f"Skipping {fname} that matches aiderignore spec.")
continue
@@ -648,7 +666,7 @@ class Coder:
if self.abs_fnames:
files_content = self.gpt_prompts.files_content_prefix
files_content += self.get_files_content()
files_reply = "Ok, any changes I propose will be to those files."
files_reply = self.gpt_prompts.files_content_assistant_reply
elif self.get_repo_map() and self.gpt_prompts.files_no_full_files_with_repo_map:
files_content = self.gpt_prompts.files_no_full_files_with_repo_map
files_reply = self.gpt_prompts.files_no_full_files_with_repo_map_reply
@@ -672,7 +690,7 @@ class Coder:
return chat_files_messages

def get_images_message(self):
if not self.main_model.accepts_images:
if not self.main_model.info.get("supports_vision"):
return None

image_messages = []
@@ -775,16 +793,37 @@ class Coder:
self.num_reflections += 1
message = self.reflected_message

def check_for_urls(self, inp):
def check_and_open_urls(self, exc, friendly_msg=None):
"""Check exception for URLs, offer to open in a browser, with user-friendly error msgs."""
text = str(exc)

if friendly_msg:
self.io.tool_warning(text)
self.io.tool_error(f"{friendly_msg}")
else:
self.io.tool_error(text)

url_pattern = re.compile(r"(https?://[^\s/$.?#].[^\s]*)")
urls = list(set(url_pattern.findall(text))) # Use set to remove duplicates
for url in urls:
url = url.rstrip(".',\"")
self.io.offer_url(url)
return urls

def check_for_urls(self, inp: str) -> List[str]:
"""Check input for URLs and offer to add them to the chat."""
url_pattern = re.compile(r"(https?://[^\s/$.?#].[^\s]*[^\s,.])")
urls = list(set(url_pattern.findall(inp))) # Use set to remove duplicates
added_urls = []
group = ConfirmGroup(urls)
for url in urls:
if url not in self.rejected_urls:
if self.io.confirm_ask("Add URL to the chat?", subject=url, group=group):
url = url.rstrip(".',\"")
if self.io.confirm_ask(
"Add URL to the chat?", subject=url, group=group, allow_never=True
):
inp += "\n\n"
inp += self.commands.cmd_web(url)
inp += self.commands.cmd_web(url, return_content=True)
added_urls.append(url)
else:
self.rejected_urls.add(url)
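The new `check_for_urls` pattern tightens the old one by requiring the match to end on a character that is not whitespace, a comma, or a period, so trailing punctuation is no longer swept into the URL. A minimal sketch of that behavior (the helper name `extract_urls` is ours, not from the diff):

```python
import re

# Same pattern as the updated check_for_urls: the final character class
# keeps trailing ",", "." and whitespace out of the captured URL.
URL_PATTERN = re.compile(r"(https?://[^\s/$.?#].[^\s]*[^\s,.])")


def extract_urls(text):
    # Deduplicate with a set, as the diff does
    return list(set(URL_PATTERN.findall(text)))
```

For example, `extract_urls("see https://example.com/docs, ok")` captures the URL without the trailing comma.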
@@ -920,12 +959,18 @@ class Coder:
platform=platform_text
)

if self.chat_language:
language = self.chat_language
else:
language = "in the same language they are using"

prompt = prompt.format(
fence=self.fence,
lazy_prompt=lazy_prompt,
platform=platform_text,
shell_cmd_prompt=shell_cmd_prompt,
shell_cmd_reminder=shell_cmd_reminder,
language=language,
)
return prompt

@@ -1064,13 +1109,15 @@ class Coder:
self.warming_pings_left -= 1
self.next_cache_warm = time.time() + delay

kwargs = dict(self.main_model.extra_params) or dict()
kwargs["max_tokens"] = 1

try:
completion = litellm.completion(
model=self.main_model.name,
messages=self.cache_warming_chunks.cacheable_messages(),
stream=False,
max_tokens=1,
extra_headers=self.main_model.extra_headers,
**kwargs,
)
except Exception as err:
self.io.tool_warning(f"Cache warming error: {str(err)}")
@@ -1102,10 +1149,15 @@ class Coder:
utils.show_messages(messages, functions=self.functions)

self.multi_response_content = ""
self.mdstream = self.io.assistant_output("", self.stream)
if self.show_pretty() and self.stream:
self.mdstream = self.io.get_assistant_mdstream()
else:
self.mdstream = None

retry_delay = 0.125

litellm_ex = LiteLLMExceptions()

self.usage_report = None
exhausted = False
interrupted = False
@@ -1114,24 +1166,37 @@ class Coder:
try:
yield from self.send(messages, functions=self.functions)
break
except retry_exceptions() as err:
self.io.tool_warning(str(err))
retry_delay *= 2
if retry_delay > 60:
except litellm_ex.exceptions_tuple() as err:
ex_info = litellm_ex.get_ex_info(err)

if ex_info.name == "ContextWindowExceededError":
exhausted = True
break

should_retry = ex_info.retry
if should_retry:
retry_delay *= 2
if retry_delay > RETRY_TIMEOUT:
should_retry = False

if not should_retry:
self.mdstream = None
self.check_and_open_urls(err, ex_info.description)
break

err_msg = str(err)
if ex_info.description:
self.io.tool_warning(err_msg)
self.io.tool_error(ex_info.description)
else:
self.io.tool_error(err_msg)

self.io.tool_output(f"Retrying in {retry_delay:.1f} seconds...")
time.sleep(retry_delay)
continue
except KeyboardInterrupt:
interrupted = True
break
except litellm.ContextWindowExceededError:
# The input is overflowing the context window!
exhausted = True
break
except litellm.exceptions.BadRequestError as br_err:
self.io.tool_error(f"BadRequestError: {br_err}")
return
except FinishReasonLength:
# We hit the output limit!
if not self.main_model.info.get("supports_assistant_prefill"):
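The retry loop above doubles `retry_delay` on each retryable error and gives up once the next delay would exceed `RETRY_TIMEOUT`. A minimal sketch of that schedule, assuming a 60-second cap (matching the hard-coded `60` in the removed code; the function name is ours):

```python
def retry_schedule(initial=0.125, cap=60):
    """Return the successive sleep delays: double each time,
    stop once the doubled delay exceeds the cap."""
    delays = []
    delay = initial
    while True:
        delay *= 2
        if delay > cap:
            break
        delays.append(delay)
    return delays
```

With the defaults this yields 0.25, 0.5, 1, 2, ... up to 32 seconds, then stops, since 64 would exceed the cap.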
@@ -1147,9 +1212,10 @@ class Coder:
dict(role="assistant", content=self.multi_response_content, prefix=True)
)
except Exception as err:
self.io.tool_error(f"Unexpected error: {err}")
self.mdstream = None
lines = traceback.format_exception(type(err), err, err.__traceback__)
self.io.tool_error("".join(lines))
self.io.tool_warning("".join(lines))
self.io.tool_error(str(err))
return
finally:
if self.mdstream:
@@ -1179,6 +1245,11 @@ class Coder:
else:
content = ""

try:
self.reply_completed()
except KeyboardInterrupt:
interrupted = True

if interrupted:
content += "\n^C KeyboardInterrupt"
self.cur_messages += [dict(role="assistant", content=content)]
@@ -1235,6 +1306,9 @@ class Coder:
else:
self.reflected_message = add_rel_files_message

def reply_completed(self):
pass

def show_exhausted_error(self):
output_tokens = 0
if self.partial_response_content:
@@ -1286,11 +1360,9 @@ class Coder:
res.append("- Use /clear to clear the chat history.")
res.append("- Break your code into smaller source files.")

res.append("")
res.append(f"For more info: {urls.token_limits}")

res = "".join([line + "\n" for line in res])
self.io.tool_error(res)
self.io.offer_url(urls.token_limits)

def lint_edited(self, fnames):
res = ""
@@ -1364,7 +1436,7 @@ class Coder:
added_fnames = []
group = ConfirmGroup(new_mentions)
for rel_fname in sorted(new_mentions):
if self.io.confirm_ask(f"Add {rel_fname} to the chat?", group=group):
if self.io.confirm_ask(f"Add {rel_fname} to the chat?", group=group, allow_never=True):
self.add_rel_fname(rel_fname)
added_fnames.append(rel_fname)
else:
@@ -1395,8 +1467,7 @@ class Coder:
functions,
self.stream,
temp,
extra_headers=model.extra_headers,
max_tokens=model.max_tokens,
extra_params=model.extra_params,
)
self.chat_completion_call_hashes.append(hash_object.hexdigest())

@@ -1459,7 +1530,7 @@ class Coder:
raise Exception("No data found in LLM response!")

show_resp = self.render_incremental_response(True)
self.io.assistant_output(show_resp)
self.io.assistant_output(show_resp, pretty=self.show_pretty())

if (
hasattr(completion.choices[0], "finish_reason")
@@ -1535,7 +1606,6 @@ class Coder:
completion.usage, "cache_creation_input_tokens"
):
self.message_tokens_sent += prompt_tokens
self.message_tokens_sent += cache_hit_tokens
self.message_tokens_sent += cache_write_tokens
else:
self.message_tokens_sent += prompt_tokens
@@ -1617,11 +1687,27 @@ class Coder:
self.usage_report = tokens_report + sep + cost_report

def show_usage_report(self):
if self.usage_report:
self.io.tool_output(self.usage_report)
self.message_cost = 0.0
self.message_tokens_sent = 0
self.message_tokens_received = 0
if not self.usage_report:
return

self.io.tool_output(self.usage_report)

prompt_tokens = self.message_tokens_sent
completion_tokens = self.message_tokens_received
self.event(
"message_send",
main_model=self.main_model,
edit_format=self.edit_format,
prompt_tokens=prompt_tokens,
completion_tokens=completion_tokens,
total_tokens=prompt_tokens + completion_tokens,
cost=self.message_cost,
total_cost=self.total_cost,
)

self.message_cost = 0.0
self.message_tokens_sent = 0
self.message_tokens_received = 0

def get_multi_response_content(self, final=False):
cur = self.multi_response_content or ""
@@ -1696,6 +1782,10 @@ class Coder:
self.check_for_dirty_commit(path)
return True

if self.repo and self.repo.git_ignored_file(path):
self.io.tool_warning(f"Skipping edits to {path} that matches gitignore spec.")
return

if not Path(full_path).exists():
if not self.io.confirm_ask("Create new file?", subject=path):
self.io.tool_output(f"Skipping edits to {path}")
@@ -1790,8 +1880,10 @@ class Coder:
edited = set()
try:
edits = self.get_edits()
edits = self.apply_edits_dry_run(edits)
edits = self.prepare_to_edit(edits)
edited = set(edit[0] for edit in edits)

self.apply_edits(edits)
except ValueError as err:
self.num_malformed_responses += 1
@@ -1919,6 +2011,9 @@ class Coder:
def apply_edits(self, edits):
return

def apply_edits_dry_run(self, edits):
return edits

def run_shell_commands(self):
if not self.suggest_shell_commands:
return ""
@@ -1942,7 +2037,11 @@ class Coder:
)
prompt = "Run shell command?" if command_count == 1 else "Run shell commands?"
if not self.io.confirm_ask(
prompt, subject="\n".join(commands), explicit_yes_required=True, group=group
prompt,
subject="\n".join(commands),
explicit_yes_required=True,
group=group,
allow_never=True,
):
return

@@ -1961,7 +2060,7 @@ class Coder:
accumulated_output += f"Output from {command}\n{output}\n"

if accumulated_output.strip() and not self.io.confirm_ask(
"Add command output to the chat?"
"Add command output to the chat?", allow_never=True
):
accumulated_output = ""

@@ -22,6 +22,8 @@ You always COMPLETELY IMPLEMENT the needed code!
Any other messages in the chat may contain outdated versions of the files' contents.
""" # noqa: E501

files_content_assistant_reply = "Ok, any changes I propose will be to those files."

files_no_full_files = "I am not sharing any files that you can edit yet."

files_no_full_files_with_repo_map = """Don't try and edit any existing code without asking me to add the files to the chat!

@@ -35,29 +35,47 @@ class EditBlockCoder(Coder):

return edits

def apply_edits(self, edits):
def apply_edits_dry_run(self, edits):
return self.apply_edits(edits, dry_run=True)

def apply_edits(self, edits, dry_run=False):
failed = []
passed = []
updated_edits = []

for edit in edits:
path, original, updated = edit
full_path = self.abs_root_path(path)
content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence)
if not new_content:
new_content = None

if Path(full_path).exists():
content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence)

# If the edit failed, and
# this is not a "create a new file" with an empty original...
# https://github.com/Aider-AI/aider/issues/2258
if not new_content and original.strip():
# try patching any of the other files in the chat
for full_path in self.abs_fnames:
content = self.io.read_text(full_path)
new_content = do_replace(full_path, content, original, updated, self.fence)
if new_content:
path = self.get_rel_fname(full_path)
break

updated_edits.append((path, original, updated))

if new_content:
self.io.write_text(full_path, new_content)
if not dry_run:
self.io.write_text(full_path, new_content)
passed.append(edit)
else:
failed.append(edit)

if dry_run:
return updated_edits

if not failed:
return

@@ -365,9 +383,9 @@ def do_replace(fname, content, before_text, after_text, fence=None):
return new_content


HEAD = r"<{5,9} SEARCH"
DIVIDER = r"={5,9}"
UPDATED = r">{5,9} REPLACE"
HEAD = r"^<{5,9} SEARCH\s*$"
DIVIDER = r"^={5,9}\s*$"
UPDATED = r"^>{5,9} REPLACE\s*$"

HEAD_ERR = "<<<<<<< SEARCH"
DIVIDER_ERR = "======="
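The regex change above anchors the fence markers with `^` and `$` (plus optional trailing whitespace), so a SEARCH/REPLACE marker is only recognized when it occupies a whole line, not when the same characters appear mid-line. A small sketch of the difference, using the new `HEAD` pattern with `re.MULTILINE`:

```python
import re

# Old, unanchored pattern vs. the anchored pattern from the diff
HEAD_OLD = re.compile(r"<{5,9} SEARCH")
HEAD_NEW = re.compile(r"^<{5,9} SEARCH\s*$", re.MULTILINE)

marker_line = "<<<<<<< SEARCH"
mid_line = "some code <<<<<<< SEARCH trailing text"
```

The old pattern matches both strings; the anchored one matches only the standalone marker line, which avoids false positives when file content happens to contain the marker text.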
@@ -400,7 +418,7 @@ def strip_filename(filename, fence):
filename = filename.strip("`")
filename = filename.strip("*")

# https://github.com/paul-gauthier/aider/issues/1158
# https://github.com/Aider-AI/aider/issues/1158
# filename = filename.replace("\\_", "_")

return filename

@@ -11,7 +11,7 @@ Respect and use existing conventions, libraries, etc that are already present in
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.

Always reply to the user in the same language they are using.
Always reply to the user in {language}.

Once you understand the request you MUST:

@@ -34,7 +34,7 @@ ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
4. *Concisely* suggest any shell commands the user might want to run in ```bash blocks.

Just suggest shell commands this way, not example code.
Only suggest complete shell commands that area ready to execute, without placeholders.
Only suggest complete shell commands that are ready to execute, without placeholders.
Only suggest at most a few shell commands at a time, not more than 1-3.

Use the appropriate shell based on the user's system info:
@@ -159,8 +159,9 @@ Use the *FULL* file path, as shown to you by the user.
Every *SEARCH* section must *EXACTLY MATCH* the existing file content, character for character, including all comments, docstrings, etc.
If the file contains code or other data wrapped/escaped in json/xml/quotes or other containers, you need to propose edits to the literal contents of the file, including the container markup.

*SEARCH/REPLACE* blocks will replace *all* matching occurrences.
Include enough lines to make the SEARCH blocks uniquely match the lines to change.
*SEARCH/REPLACE* blocks will *only* replace the first match occurrence.
Including multiple unique *SEARCH/REPLACE* blocks if needed.
Include enough lines in each SEARCH section to uniquely match each set of lines that need to change.

Keep *SEARCH/REPLACE* blocks concise.
Break large *SEARCH/REPLACE* blocks into a series of smaller blocks that each change a small portion of the file.

aider/coders/editor_editblock_coder.py (new file, 7 lines)
@@ -0,0 +1,7 @@
from .editblock_coder import EditBlockCoder
from .editor_editblock_prompts import EditorEditBlockPrompts


class EditorEditBlockCoder(EditBlockCoder):
    edit_format = "editor-diff"
    gpt_prompts = EditorEditBlockPrompts()

aider/coders/editor_editblock_prompts.py (new file, 16 lines)
@@ -0,0 +1,16 @@
# flake8: noqa: E501

from .editblock_prompts import EditBlockPrompts


class EditorEditBlockPrompts(EditBlockPrompts):
    main_system = """Act as an expert software developer who edits source code.
{lazy_prompt}
Describe each change with a *SEARCH/REPLACE block* per the examples below.
All changes to files must use this *SEARCH/REPLACE block* format.
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
"""

    shell_cmd_prompt = ""
    no_shell_cmd_prompt = ""
    shell_cmd_reminder = ""

aider/coders/editor_whole_coder.py (new file, 7 lines)
@@ -0,0 +1,7 @@
from .editor_whole_prompts import EditorWholeFilePrompts
from .wholefile_coder import WholeFileCoder


class EditorWholeFileCoder(WholeFileCoder):
    edit_format = "editor-whole"
    gpt_prompts = EditorWholeFilePrompts()

aider/coders/editor_whole_prompts.py (new file, 10 lines)
@@ -0,0 +1,10 @@
# flake8: noqa: E501

from .wholefile_prompts import WholeFilePrompts


class EditorWholeFilePrompts(WholeFilePrompts):
    main_system = """Act as an expert software developer and make changes to source code.
{lazy_prompt}
Output a copy of each file that needs changes.
"""
@@ -12,7 +12,7 @@ Respect and use existing conventions, libraries, etc that are already present in
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.

Always reply to the user in the same language they are using.
Always reply to the user in {language}.

For each file that needs to be changed, write out the changes similar to a unified diff like `diff -U0` would produce.
"""

@@ -58,6 +58,8 @@ class WholeFileCoder(Coder):
fname = fname.strip("*") # handle **filename.py**
fname = fname.rstrip(":")
fname = fname.strip("`")
fname = fname.lstrip("#")
fname = fname.strip()

# Issue #1232
if len(fname) > 250:

@@ -8,7 +8,7 @@ class WholeFilePrompts(CoderPrompts):
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.

Always reply to the user in the same language they are using.
Always reply to the user in {language}.

{lazy_prompt}
Once you understand the request you MUST:
@@ -52,7 +52,7 @@ path/to/filename.js
{fence[1]}

Every *file listing* MUST use this format:
- First line: the filename with any originally provided path
- First line: the filename with any originally provided path; no extra markup, punctuation, comments, etc. **JUST** the filename with path.
- Second line: opening {fence[0]}
- ... entire content of the file ...
- Final line: closing {fence[1]}

@@ -1,13 +1,17 @@
import glob
import os
import re
import subprocess
import sys
import tempfile
from collections import OrderedDict
from os.path import expanduser
from pathlib import Path

import pyperclip
from PIL import Image, ImageGrab
from prompt_toolkit.completion import Completion, PathCompleter
from prompt_toolkit.document import Document

from aider import models, prompts, voice
from aider.format_settings import format_settings
@@ -135,7 +139,7 @@ class Commands:
else:
self.io.tool_output("Please provide a partial model name to search for.")

def cmd_web(self, args):
def cmd_web(self, args, return_content=False):
"Scrape a webpage, convert to markdown and send in a message"

url = args.strip()
@@ -154,15 +158,28 @@ class Commands:
)

content = self.scraper.scrape(url) or ""
content = f"{url}:\n\n" + content
content = f"Here is the content of {url}:\n\n" + content
if return_content:
return content

self.io.tool_output("... done.")
self.io.tool_output("... added to chat.")

return content
self.coder.cur_messages += [
dict(role="user", content=content),
dict(role="assistant", content="Ok."),
]

def is_command(self, inp):
return inp[0] in "/!"

def get_raw_completions(self, cmd):
assert cmd.startswith("/")
cmd = cmd[1:]
cmd = cmd.replace("-", "_")

raw_completer = getattr(self, f"completions_raw_{cmd}", None)
return raw_completer

def get_completions(self, cmd):
assert cmd.startswith("/")
cmd = cmd[1:]
@@ -211,6 +228,7 @@ class Commands:

def run(self, inp):
if inp.startswith("!"):
self.coder.event("command_run")
return self.do_run("run", inp[1:])

res = self.matching_commands(inp)
@@ -218,9 +236,13 @@ class Commands:
return
matching_commands, first_word, rest_inp = res
if len(matching_commands) == 1:
return self.do_run(matching_commands[0][1:], rest_inp)
command = matching_commands[0][1:]
self.coder.event(f"command_{command}")
return self.do_run(command, rest_inp)
elif first_word in matching_commands:
return self.do_run(first_word[1:], rest_inp)
command = first_word[1:]
self.coder.event(f"command_{command}")
return self.do_run(command, rest_inp)
elif len(matching_commands) > 1:
self.io.tool_error(f"Ambiguous command: {', '.join(matching_commands)}")
else:
@@ -569,8 +591,62 @@ class Commands:
fname = f'"{fname}"'
return fname

def completions_read_only(self):
return self.completions_add()
def completions_raw_read_only(self, document, complete_event):
# Get the text before the cursor
text = document.text_before_cursor

# Skip the first word and the space after it
after_command = text.split()[-1]

# Create a new Document object with the text after the command
new_document = Document(after_command, cursor_position=len(after_command))

def get_paths():
return [self.coder.root] if self.coder.root else None

path_completer = PathCompleter(
get_paths=get_paths,
only_directories=False,
expanduser=True,
)

# Adjust the start_position to replace all of 'after_command'
adjusted_start_position = -len(after_command)

# Collect all completions
all_completions = []

# Iterate over the completions and modify them
for completion in path_completer.get_completions(new_document, complete_event):
quoted_text = self.quote_fname(after_command + completion.text)
all_completions.append(
Completion(
text=quoted_text,
start_position=adjusted_start_position,
display=completion.display,
style=completion.style,
selected_style=completion.selected_style,
)
)

# Add completions from the 'add' command
add_completions = self.completions_add()
for completion in add_completions:
if after_command in completion:
all_completions.append(
Completion(
text=completion,
start_position=adjusted_start_position,
display=completion,
)
)

# Sort all completions based on their text
sorted_completions = sorted(all_completions, key=lambda c: c.text)

# Yield the sorted completions
for completion in sorted_completions:
yield completion

def completions_add(self):
files = set(self.coder.get_all_relative_files())
@@ -588,7 +664,7 @@ class Commands:
else:
try:
raw_matched_files = list(Path(self.coder.root).glob(pattern))
except IndexError:
except (IndexError, AttributeError):
raw_matched_files = []
except ValueError as err:
self.io.tool_error(f"Error matching {pattern}: {err}")
@@ -658,7 +734,7 @@ class Commands:
except OSError as e:
self.io.tool_error(f"Error creating file {fname}: {e}")

for matched_file in all_matched_files:
for matched_file in sorted(all_matched_files):
abs_file_path = self.coder.abs_root_path(matched_file)

if not abs_file_path.startswith(self.coder.root) and not is_image_file(matched_file):
@@ -667,8 +743,13 @@ class Commands:
)
continue

if self.coder.repo and self.coder.repo.git_ignored_file(matched_file):
self.io.tool_error(f"Can't add {matched_file} which is in gitignore")
continue

if abs_file_path in self.coder.abs_fnames:
self.io.tool_warning(f"{matched_file} is already in the chat")
self.io.tool_error(f"{matched_file} is already in the chat as an editable file")
continue
elif abs_file_path in self.coder.abs_read_only_fnames:
if self.coder.repo and self.coder.repo.path_in_repo(matched_file):
self.coder.abs_read_only_fnames.remove(abs_file_path)
@@ -681,7 +762,9 @@ class Commands:
f"Cannot add {matched_file} as it's not part of the repository"
)
else:
if is_image_file(matched_file) and not self.coder.main_model.accepts_images:
if is_image_file(matched_file) and not self.coder.main_model.info.get(
"supports_vision"
):
self.io.tool_error(
f"Cannot add image file {matched_file} as the"
f" {self.coder.main_model.name} does not support images."
@@ -735,7 +818,7 @@ class Commands:
self.io.tool_output(f"Removed {matched_file} from the chat")

def cmd_git(self, args):
"Run a git command"
"Run a git command (output excluded from chat)"
combined_output = None
try:
args = "git " + args
@@ -894,6 +977,7 @@ class Commands:
self.basic_help()
return

self.coder.event("interactive help")
from aider.coders import Coder

if not self.help:
@@ -945,6 +1029,10 @@ class Commands:
"Ask for changes to your code"
return self._generic_chat_command(args, self.coder.main_model.edit_format)

def cmd_architect(self, args):
"Enter architect mode to discuss high-level design and architecture"
return self._generic_chat_command(args, "architect")

def _generic_chat_command(self, args, edit_format):
if not args.strip():
self.io.tool_error(f"Please provide a question or topic for the {edit_format} chat.")
@@ -997,7 +1085,7 @@ class Commands:
self.io.tool_error("To use /voice you must provide an OpenAI API key.")
return
try:
self.voice = voice.Voice()
self.voice = voice.Voice(audio_format=self.args.voice_format)
except voice.SoundDeviceError:
self.io.tool_error(
"Unable to import `sounddevice` and/or `soundfile`, is portaudio installed?"
@@ -1035,8 +1123,9 @@ class Commands:

return text

def cmd_clipboard(self, args):
"Add image/text from the clipboard to the chat (optionally provide a name for the image)"
def cmd_paste(self, args):
"""Paste image/text from the clipboard into the chat.\
Optionally provide a name for the image."""
try:
# Check for image first
image = ImageGrab.grabclipboard()
@@ -1091,27 +1180,50 @@ class Commands:
return

filenames = parse_quoted_filenames(args)
for word in filenames:
# Expand the home directory if the path starts with "~"
expanded_path = os.path.expanduser(word)
abs_path = self.coder.abs_root_path(expanded_path)
all_paths = []

if not os.path.exists(abs_path):
self.io.tool_error(f"Path not found: {abs_path}")
continue
# First collect all expanded paths
for pattern in filenames:
expanded_pattern = expanduser(pattern)
if os.path.isabs(expanded_pattern):
# For absolute paths, glob it
matches = list(glob.glob(expanded_pattern))
else:
# For relative paths and globs, use glob from the root directory
matches = list(Path(self.coder.root).glob(expanded_pattern))

if not matches:
self.io.tool_error(f"No matches found for: {pattern}")
else:
all_paths.extend(matches)

# Then process them in sorted order
for path in sorted(all_paths):
abs_path = self.coder.abs_root_path(path)
if os.path.isfile(abs_path):
self._add_read_only_file(abs_path, word)
self._add_read_only_file(abs_path, path)
elif os.path.isdir(abs_path):
self._add_read_only_directory(abs_path, word)
self._add_read_only_directory(abs_path, path)
else:
self.io.tool_error(f"Not a file or directory: {abs_path}")

def _add_read_only_file(self, abs_path, original_name):
if abs_path in self.coder.abs_fnames:
self.io.tool_error(f"{original_name} is already in the chat as an editable file")
elif abs_path in self.coder.abs_read_only_fnames:
if is_image_file(original_name) and not self.coder.main_model.info.get("supports_vision"):
self.io.tool_error(
f"Cannot add image file {original_name} as the"
f" {self.coder.main_model.name} does not support images."
)
return

if abs_path in self.coder.abs_read_only_fnames:
self.io.tool_error(f"{original_name} is already in the chat as a read-only file")
return
elif abs_path in self.coder.abs_fnames:
self.coder.abs_fnames.remove(abs_path)
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(
f"Moved {original_name} from editable to read-only files in the chat"
)
else:
self.coder.abs_read_only_fnames.add(abs_path)
self.io.tool_output(f"Added {original_name} to read-only files.")
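The hunk above splits pattern expansion into two cases: absolute patterns go through `glob.glob`, while relative patterns are globbed from the repo root with `Path.glob`, and the results are processed in sorted order. A standalone sketch of that collection step (the function name `expand_patterns` is ours):

```python
import glob
import os
from pathlib import Path


def expand_patterns(patterns, root):
    """Collect matches: absolute patterns via glob.glob,
    relative patterns via Path(root).glob, then sort."""
    all_paths = []
    for pattern in patterns:
        expanded = os.path.expanduser(pattern)
        if os.path.isabs(expanded):
            # Absolute paths: glob directly
            matches = list(glob.glob(expanded))
        else:
            # Relative paths and globs: glob from the root directory
            matches = list(Path(root).glob(expanded))
        all_paths.extend(matches)
    return sorted(str(p) for p in all_paths)
```

Globbing relative patterns from the root keeps `/read-only src/*.py` working the same regardless of the directory aider was launched from.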
@@ -1152,7 +1264,93 @@ class Commands:
def cmd_settings(self, args):
"Print out the current settings"
settings = format_settings(self.parser, self.args)
self.io.tool_output(settings)
announcements = "\n".join(self.coder.get_announcements())
output = f"{announcements}\n{settings}"
self.io.tool_output(output)

def completions_raw_load(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)

def cmd_load(self, args):
"Load and execute commands from a file"
if not args.strip():
self.io.tool_error("Please provide a filename containing commands to load.")
return

try:
with open(args.strip(), "r", encoding=self.io.encoding, errors="replace") as f:
commands = f.readlines()
except FileNotFoundError:
self.io.tool_error(f"File not found: {args}")
return
except Exception as e:
self.io.tool_error(f"Error reading file: {e}")
return

for cmd in commands:
cmd = cmd.strip()
if not cmd or cmd.startswith("#"):
continue

self.io.tool_output(f"\nExecuting: {cmd}")
self.run(cmd)

def completions_raw_save(self, document, complete_event):
return self.completions_raw_read_only(document, complete_event)

def cmd_save(self, args):
"Save commands to a file that can reconstruct the current chat session's files"
if not args.strip():
self.io.tool_error("Please provide a filename to save the commands to.")
return

try:
with open(args.strip(), "w", encoding=self.io.encoding) as f:
f.write("/drop\n")
# Write commands to add editable files
for fname in sorted(self.coder.abs_fnames):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/add {rel_fname}\n")

# Write commands to add read-only files
for fname in sorted(self.coder.abs_read_only_fnames):
# Use absolute path for files outside repo root, relative path for files inside
if Path(fname).is_relative_to(self.coder.root):
rel_fname = self.coder.get_rel_fname(fname)
f.write(f"/read-only {rel_fname}\n")
else:
f.write(f"/read-only {fname}\n")

self.io.tool_output(f"Saved commands to {args.strip()}")
except Exception as e:
self.io.tool_error(f"Error saving commands to file: {e}")
||||
|
||||
def cmd_copy(self, args):
|
||||
"Copy the last assistant message to the clipboard"
|
||||
all_messages = self.coder.done_messages + self.coder.cur_messages
|
||||
assistant_messages = [msg for msg in reversed(all_messages) if msg["role"] == "assistant"]
|
||||
|
||||
if not assistant_messages:
|
||||
self.io.tool_error("No assistant messages found to copy.")
|
||||
return
|
||||
|
||||
last_assistant_message = assistant_messages[0]["content"]
|
||||
|
||||
try:
|
||||
pyperclip.copy(last_assistant_message)
|
||||
preview = (
|
||||
last_assistant_message[:50] + "..."
|
||||
if len(last_assistant_message) > 50
|
||||
else last_assistant_message
|
||||
)
|
||||
self.io.tool_output(f"Copied last assistant message to clipboard. Preview: {preview}")
|
||||
except pyperclip.PyperclipException as e:
|
||||
self.io.tool_error(f"Failed to copy to clipboard: {str(e)}")
|
||||
self.io.tool_output(
|
||||
"You may need to install xclip or xsel on Linux, or pbcopy on macOS."
|
||||
)
|
||||
except Exception as e:
|
||||
self.io.tool_error(f"An unexpected error occurred while copying to clipboard: {str(e)}")
|
||||
|
||||
def cmd_report(self, args):
|
||||
"Report a problem by opening a GitHub Issue"
|
||||
|
||||
aider/exceptions.py (new file, 76 lines)
@@ -0,0 +1,76 @@
from dataclasses import dataclass


@dataclass
class ExInfo:
    name: str
    retry: bool
    description: str


EXCEPTIONS = [
    ExInfo("APIConnectionError", True, None),
    ExInfo("APIError", True, None),
    ExInfo("APIResponseValidationError", True, None),
    ExInfo(
        "AuthenticationError",
        False,
        "The API provider is not able to authenticate you. Check your API key.",
    ),
    ExInfo("AzureOpenAIError", True, None),
    ExInfo("BadRequestError", False, None),
    ExInfo("BudgetExceededError", True, None),
    ExInfo(
        "ContentPolicyViolationError",
        True,
        "The API provider has refused the request due to a safety policy about the content.",
    ),
    ExInfo("ContextWindowExceededError", False, None),  # special case handled in base_coder
    ExInfo("InternalServerError", True, "The API provider's servers are down or overloaded."),
    ExInfo("InvalidRequestError", True, None),
    ExInfo("JSONSchemaValidationError", True, None),
    ExInfo("NotFoundError", False, None),
    ExInfo("OpenAIError", True, None),
    ExInfo(
        "RateLimitError",
        True,
        "The API provider has rate limited you. Try again later or check your quotas.",
    ),
    ExInfo("RouterRateLimitError", True, None),
    ExInfo("ServiceUnavailableError", True, "The API provider's servers are down or overloaded."),
    ExInfo("UnprocessableEntityError", True, None),
    ExInfo("UnsupportedParamsError", True, None),
]


class LiteLLMExceptions:
    exceptions = dict()

    def __init__(self):
        self._load()

    def _load(self, strict=False):
        import litellm

        for var in dir(litellm):
            if not var.endswith("Error"):
                continue

            ex_info = None
            for exi in EXCEPTIONS:
                if var == exi.name:
                    ex_info = exi
                    break

            if strict and not ex_info:
                raise ValueError(f"{var} is in litellm but not in aider's exceptions list")

            ex = getattr(litellm, var)
            self.exceptions[ex] = ex_info

    def exceptions_tuple(self):
        return tuple(self.exceptions)

    def get_ex_info(self, ex):
        """Return the ExInfo for a given exception instance"""
        return self.exceptions.get(ex.__class__, ExInfo(None, None, None))
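The lookup above degrades gracefully for unknown exception classes by falling back to an empty `ExInfo`. A minimal self-contained sketch of the same pattern, using hypothetical stand-in exception classes rather than litellm's real ones:

```python
from dataclasses import dataclass


@dataclass
class ExInfo:
    name: str
    retry: bool
    description: str


# Hypothetical stand-ins for provider exception classes (not litellm's)
class RateLimitError(Exception):
    pass


class AuthenticationError(Exception):
    pass


REGISTRY = {
    RateLimitError: ExInfo("RateLimitError", True, "Rate limited; retry later."),
    AuthenticationError: ExInfo("AuthenticationError", False, "Check your API key."),
}


def get_ex_info(ex):
    # Unknown exception classes fall back to a "no info" record,
    # mirroring LiteLLMExceptions.get_ex_info above
    return REGISTRY.get(ex.__class__, ExInfo(None, None, None))


print(get_ex_info(RateLimitError()).retry)    # True
print(get_ex_info(ValueError("boom")).name)   # None
```

Callers can then branch on `retry` without special-casing exceptions they have never seen.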
@@ -160,7 +160,7 @@ class GUI:
        st.warning(
            "This browser version of aider is experimental. Please share feedback in [GitHub"
-            " issues](https://github.com/paul-gauthier/aider/issues)."
+            " issues](https://github.com/Aider-AI/aider/issues)."
        )

    def do_settings_tab(self):

@@ -528,7 +528,7 @@ def gui_main():
        page_icon=urls.favicon,
        menu_items={
            "Get Help": urls.website,
-            "Report a bug": "https://github.com/paul-gauthier/aider/issues",
+            "Report a bug": "https://github.com/Aider-AI/aider/issues",
            "About": "# Aider\nAI pair programming in your browser.",
        },
    )
@@ -1,6 +1,8 @@
#!/usr/bin/env python

+import json
import os
+import shutil
import warnings
from pathlib import Path
@@ -38,24 +40,45 @@ def get_package_files():

def fname_to_url(filepath):
-    website = "website/"
-    index = "/index.md"
+    website = "website"
+    index = "index.md"
    md = ".md"

-    docid = ""
-    if filepath.startswith("website/_includes/"):
-        pass
-    elif filepath.startswith(website):
-        docid = filepath[len(website) :]
+    # Convert backslashes to forward slashes for consistency
+    filepath = filepath.replace("\\", "/")

-        if filepath.endswith(index):
-            filepath = filepath[: -len(index)] + "/"
-        elif filepath.endswith(md):
-            filepath = filepath[: -len(md)] + ".html"
+    # Convert to Path object for easier manipulation
+    path = Path(filepath)

-        docid = "https://aider.chat/" + filepath
+    # Split the path into parts
+    parts = path.parts

-    return docid
+    # Find the 'website' part in the path
+    try:
+        website_index = [p.lower() for p in parts].index(website.lower())
+    except ValueError:
+        return ""  # 'website' not found in the path
+
+    # Extract the part of the path starting from 'website'
+    relevant_parts = parts[website_index + 1 :]
+
+    # Handle _includes directory
+    if relevant_parts and relevant_parts[0].lower() == "_includes":
+        return ""
+
+    # Join the remaining parts
+    url_path = "/".join(relevant_parts)
+
+    # Handle index.md and other .md files
+    if url_path.lower().endswith(index.lower()):
+        url_path = url_path[: -len(index)]
+    elif url_path.lower().endswith(md.lower()):
+        url_path = url_path[: -len(md)] + ".html"
+
+    # Ensure the URL starts and ends with '/'
+    url_path = url_path.strip("/")
+
+    return f"https://aider.chat/{url_path}"


def get_index():
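As a quick sanity check, the rewritten helper maps repo paths to site URLs like this (the body below is copied from the new version in the hunk above, so it runs standalone):

```python
from pathlib import Path


def fname_to_url(filepath):
    website = "website"
    index = "index.md"
    md = ".md"

    # Normalize Windows-style separators, then split into parts
    filepath = filepath.replace("\\", "/")
    parts = Path(filepath).parts

    try:
        website_index = [p.lower() for p in parts].index(website.lower())
    except ValueError:
        return ""  # 'website' not found in the path

    relevant_parts = parts[website_index + 1 :]
    if relevant_parts and relevant_parts[0].lower() == "_includes":
        return ""

    url_path = "/".join(relevant_parts)
    if url_path.lower().endswith(index.lower()):
        url_path = url_path[: -len(index)]
    elif url_path.lower().endswith(md.lower()):
        url_path = url_path[: -len(md)] + ".html"

    return f"https://aider.chat/{url_path.strip('/')}"


print(fname_to_url("website/docs/usage.md"))      # https://aider.chat/docs/usage.html
print(fname_to_url("website\\docs\\index.md"))    # https://aider.chat/docs
print(fname_to_url("website/_includes/head.md"))  # (empty string)
```

Unlike the old string-prefix version, this handles backslash paths and a `website` directory nested anywhere in the path.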
@@ -69,12 +92,17 @@ def get_index():

    dname = Path.home() / ".aider" / "caches" / ("help." + __version__)

-    if dname.exists():
-        storage_context = StorageContext.from_defaults(
-            persist_dir=dname,
-        )
-        index = load_index_from_storage(storage_context)
-    else:
-        index = None
+    index = None
+    try:
+        if dname.exists():
+            storage_context = StorageContext.from_defaults(
+                persist_dir=dname,
+            )
+            index = load_index_from_storage(storage_context)
+    except (OSError, json.JSONDecodeError):
+        shutil.rmtree(dname)

    if index is None:
        parser = MarkdownNodeParser()

        nodes = []
@@ -109,7 +109,7 @@ class ChatSummary:
        for model in self.models:
            try:
                summary = simple_send_with_retries(
-                    model.name, summarize_messages, extra_headers=model.extra_headers
+                    model.name, summarize_messages, extra_params=model.extra_params
                )
                if summary is not None:
                    summary = prompts.summary_prefix + summary
aider/io.py (240 lines)
@@ -1,11 +1,14 @@
import base64
import os
import webbrowser
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from io import StringIO
from pathlib import Path

from prompt_toolkit.completion import Completer, Completion, ThreadedCompleter
from prompt_toolkit.cursor_shapes import ModalCursorShapeConfig
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.history import FileHistory
from prompt_toolkit.key_binding import KeyBindings

@@ -14,10 +17,12 @@ from prompt_toolkit.shortcuts import CompleteStyle, PromptSession
from prompt_toolkit.styles import Style
from pygments.lexers import MarkdownLexer, guess_lexer_for_filename
from pygments.token import Token
from rich.columns import Columns
from rich.console import Console
from rich.markdown import Markdown
from rich.style import Style as RichStyle
from rich.text import Text
-from rich.markdown import Markdown

from aider.mdstream import MarkdownStream

from .dump import dump  # noqa: F401
@@ -91,17 +96,16 @@ class AutoCompleter(Completer):
            (token[1], f"`{token[1]}`") for token in tokens if token[0] in Token.Name
        )

-    def get_command_completions(self, text, words):
-        candidates = []
+    def get_command_completions(self, document, complete_event, text, words):
        if len(words) == 1 and not text[-1].isspace():
            partial = words[0].lower()
            candidates = [cmd for cmd in self.command_names if cmd.startswith(partial)]
-            return candidates
+            for candidate in sorted(candidates):
+                yield Completion(candidate, start_position=-len(words[-1]))
+            return

-        if len(words) <= 1:
-            return []
-        if text[-1].isspace():
-            return []
+        if len(words) <= 1 or text[-1].isspace():
+            return

        cmd = words[0]
        partial = words[-1].lower()

@@ -112,6 +116,11 @@ class AutoCompleter(Completer):
        elif cmd not in matches:
            return

+        raw_completer = self.commands.get_raw_completions(cmd)
+        if raw_completer:
+            yield from raw_completer(document, complete_event)
+            return
+
        if cmd not in self.command_completions:
            candidates = self.commands.get_completions(cmd)
            self.command_completions[cmd] = candidates

@@ -122,7 +131,8 @@ class AutoCompleter(Completer):
            return

        candidates = [word for word in candidates if partial in word.lower()]
-        return candidates
+        for candidate in sorted(candidates):
+            yield Completion(candidate, start_position=-len(words[-1]))

    def get_completions(self, document, complete_event):
        self.tokenize()

@@ -137,11 +147,8 @@ class AutoCompleter(Completer):
            return

        if text[0] == "/":
-            candidates = self.get_command_completions(text, words)
-            if candidates is not None:
-                for candidate in sorted(candidates):
-                    yield Completion(candidate, start_position=-len(words[-1]))
-            return
+            yield from self.get_command_completions(document, complete_event, text, words)
+            return

        candidates = self.words
        candidates.update(set(self.fname_to_rel_fnames))
@@ -179,12 +186,18 @@ class InputOutput:
        tool_error_color="red",
        tool_warning_color="#FFA500",
        assistant_output_color="blue",
        completion_menu_color=None,
        completion_menu_bg_color=None,
        completion_menu_current_color=None,
        completion_menu_current_bg_color=None,
        code_theme="default",
        encoding="utf-8",
        dry_run=False,
        llm_history_file=None,
        editingmode=EditingMode.EMACS,
        fancy_input=True,
    ):
        self.never_prompts = set()
        self.editingmode = editingmode
        no_color = os.environ.get("NO_COLOR")
        if no_color is not None and no_color != "":
@@ -195,6 +208,11 @@ class InputOutput:
        self.tool_error_color = tool_error_color if pretty else None
        self.tool_warning_color = tool_warning_color if pretty else None
        self.assistant_output_color = assistant_output_color
        self.completion_menu_color = completion_menu_color if pretty else None
        self.completion_menu_bg_color = completion_menu_bg_color if pretty else None
        self.completion_menu_current_color = completion_menu_current_color if pretty else None
        self.completion_menu_current_bg_color = completion_menu_current_bg_color if pretty else None

        self.code_theme = code_theme

        self.input = input
@@ -220,7 +238,7 @@ class InputOutput:
        self.append_chat_history(f"\n# aider chat started at {current_time}\n\n")

        self.prompt_session = None
-        if self.pretty:
+        if fancy_input:
            # Initialize PromptSession
            session_kwargs = {
                "input": self.input,

@@ -228,6 +246,8 @@ class InputOutput:
                "lexer": PygmentsLexer(MarkdownLexer),
                "editing_mode": self.editingmode,
            }
            if self.editingmode == EditingMode.VI:
                session_kwargs["cursor"] = ModalCursorShapeConfig()
            if self.input_history_file is not None:
                session_kwargs["history"] = FileHistory(self.input_history_file)
            try:
@@ -239,6 +259,41 @@ class InputOutput:
        else:
            self.console = Console(force_terminal=False, no_color=True)  # non-pretty

    def _get_style(self):
        style_dict = {}
        if not self.pretty:
            return Style.from_dict(style_dict)

        if self.user_input_color:
            style_dict.setdefault("", self.user_input_color)
            style_dict.update(
                {
                    "pygments.literal.string": f"bold italic {self.user_input_color}",
                }
            )

        # Conditionally add 'completion-menu' style
        completion_menu_style = []
        if self.completion_menu_bg_color:
            completion_menu_style.append(f"bg:{self.completion_menu_bg_color}")
        if self.completion_menu_color:
            completion_menu_style.append(self.completion_menu_color)
        if completion_menu_style:
            style_dict["completion-menu"] = " ".join(completion_menu_style)

        # Conditionally add 'completion-menu.completion.current' style
        completion_menu_current_style = []
        if self.completion_menu_current_bg_color:
            completion_menu_current_style.append(f"bg:{self.completion_menu_current_bg_color}")
        if self.completion_menu_current_color:
            completion_menu_current_style.append(self.completion_menu_current_color)
        if completion_menu_current_style:
            style_dict["completion-menu.completion.current"] = " ".join(
                completion_menu_current_style
            )

        return Style.from_dict(style_dict)

    def read_image(self, filename):
        try:
            with open(str(filename), "rb") as image_file:
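The helper above only emits a style rule when the user actually configured a color, composing prompt_toolkit's `"bg:<color> <fg-color>"` strings piecewise. The composition rule in isolation (a plain-dict sketch, without prompt_toolkit):

```python
def build_completion_style(bg_color=None, fg_color=None):
    # Assemble "bg:<color> <fg-color>" the way _get_style composes
    # the 'completion-menu' entry: each part is optional
    parts = []
    if bg_color:
        parts.append(f"bg:{bg_color}")
    if fg_color:
        parts.append(fg_color)

    style_dict = {}
    if parts:
        style_dict["completion-menu"] = " ".join(parts)
    return style_dict


print(build_completion_style("#444444", "white"))  # {'completion-menu': 'bg:#444444 white'}
print(build_completion_style())                    # {}
```

Omitting the key entirely (rather than emitting an empty string) lets prompt_toolkit fall back to its default menu styling.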
@@ -308,7 +363,10 @@ class InputOutput:
        rel_fnames = list(rel_fnames)
-        show = ""
-        if rel_fnames:
-            show = " ".join(rel_fnames) + "\n"
+        rel_read_only_fnames = [
+            get_rel_fname(fname, root) for fname in (abs_read_only_fnames or [])
+        ]
+        show = self.format_files_for_input(rel_fnames, rel_read_only_fnames)
        if edit_format:
            show += edit_format
        show += "> "
@@ -316,15 +374,7 @@ class InputOutput:
        inp = ""
        multiline_input = False

-        if self.user_input_color and self.pretty:
-            style = Style.from_dict(
-                {
-                    "": self.user_input_color,
-                    "pygments.literal.string": f"bold italic {self.user_input_color}",
-                }
-            )
-        else:
-            style = None
+        style = self._get_style()

        completer_instance = ThreadedCompleter(
            AutoCompleter(
@@ -339,6 +389,11 @@ class InputOutput:

        kb = KeyBindings()

        @kb.add("c-space")
        def _(event):
            "Ignore Ctrl when pressing space bar"
            event.current_buffer.insert_text(" ")

        @kb.add("escape", "c-m", eager=True)
        def _(event):
            event.current_buffer.insert_text("\n")
@@ -430,13 +485,35 @@ class InputOutput:
        hist = "\n" + content.strip() + "\n\n"
        self.append_chat_history(hist)

    def offer_url(self, url, prompt="Open URL for more info?"):
        """Offer to open a URL in the browser, returns True if opened."""
        if url in self.never_prompts:
            return False
        if self.confirm_ask(prompt, subject=url, allow_never=True):
            webbrowser.open(url)
            return True
        return False

    def confirm_ask(
-        self, question, default="y", subject=None, explicit_yes_required=False, group=None
+        self,
+        question,
+        default="y",
+        subject=None,
+        explicit_yes_required=False,
+        group=None,
+        allow_never=False,
    ):
        self.num_user_asks += 1

        question_id = (question, subject)

        if question_id in self.never_prompts:
            return False

        if group and not group.show_group:
            group = None
        if group:
            allow_never = True

        valid_responses = ["yes", "no"]
        options = " (Y)es/(N)o"

@@ -446,6 +523,10 @@ class InputOutput:
            valid_responses.append("all")
            options += "/(S)kip all"
            valid_responses.append("skip")
        if allow_never:
            options += "/(D)on't ask again"
            valid_responses.append("don't")

        question += options + " [Yes]: "

        if subject:

@@ -459,10 +540,7 @@ class InputOutput:
        else:
            self.tool_output(subject, bold=True)

-        if self.pretty and self.user_input_color:
-            style = {"": self.user_input_color}
-        else:
-            style = dict()
+        style = self._get_style()

        def is_valid_response(text):
            if not text:

@@ -481,7 +559,7 @@ class InputOutput:
        if self.prompt_session:
            res = self.prompt_session.prompt(
                question,
-                style=Style.from_dict(style),
+                style=style,
            )
        else:
            res = input(question)

@@ -499,6 +577,12 @@ class InputOutput:

        res = res.lower()[0]

        if res == "d" and allow_never:
            self.never_prompts.add(question_id)
            hist = f"{question.strip()} {res}"
            self.append_chat_history(hist, linebreak=True, blockquote=True)
            return False

        if explicit_yes_required:
            is_yes = res == "y"
        else:
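The "(D)on't ask again" flow above keys the opt-out set on `(question, subject)` pairs, so declining one prompt does not silence unrelated ones. A stripped-down sketch of that bookkeeping (a hypothetical `Confirmer` class, not aider's actual API):

```python
class Confirmer:
    def __init__(self):
        # (question, subject) pairs the user opted out of permanently
        self.never_prompts = set()

    def confirm(self, question, subject=None, response="y", allow_never=False):
        question_id = (question, subject)

        # Previously silenced prompts answer "no" without asking
        if question_id in self.never_prompts:
            return False

        # "d" records the opt-out and answers "no" this time too
        if response == "d" and allow_never:
            self.never_prompts.add(question_id)
            return False

        return response == "y"


c = Confirmer()
print(c.confirm("Run tests?", response="d", allow_never=True))  # False, and remembered
print(c.confirm("Run tests?", response="y"))                    # False, silenced
print(c.confirm("Lint?", response="y"))                         # True, different question
```

Including `subject` in the key is what lets `offer_url` silence a specific URL rather than every URL prompt.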
@@ -525,10 +609,7 @@ class InputOutput:
        self.tool_output()
        self.tool_output(subject, bold=True)

-        if self.pretty and self.user_input_color:
-            style = Style.from_dict({"": self.user_input_color})
-        else:
-            style = None
+        style = self._get_style()

        if self.yes is True:
            res = "yes"
@@ -586,27 +667,30 @@ class InputOutput:
            style = RichStyle(**style)
        self.console.print(*messages, style=style)

-    def assistant_output(self, message, stream=False):
-        mdStream = None
+    def get_assistant_mdstream(self):
+        mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
+        mdStream = MarkdownStream(mdargs=mdargs)
+        return mdStream
+
+    def assistant_output(self, message, pretty=None):
        show_resp = message

-        if self.pretty:
-            if stream:
-                mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
-                mdStream = MarkdownStream(mdargs=mdargs)
-            else:
-                show_resp = Markdown(
-                    message, style=self.assistant_output_color, code_theme=self.code_theme
-                )
+        # Coder will force pretty off if fence is not triple-backticks
+        if pretty is None:
+            pretty = self.pretty
+
+        if pretty:
+            show_resp = Markdown(
+                message, style=self.assistant_output_color, code_theme=self.code_theme
+            )
+        else:
+            show_resp = Text(message or "<no response>")

        self.console.print(show_resp)
-        return mdStream

    def print(self, message=""):
        print(message)

    def append_chat_history(self, text, linebreak=False, blockquote=False, strip=True):
        if blockquote:
            if strip:
@@ -620,11 +704,57 @@ class InputOutput:
            text += "\n"
        if self.chat_history_file is not None:
            try:
-                with self.chat_history_file.open("a", encoding=self.encoding) as f:
+                with self.chat_history_file.open("a", encoding=self.encoding, errors="ignore") as f:
                    f.write(text)
-            except (PermissionError, OSError):
-                self.tool_error(
-                    f"Warning: Unable to write to chat history file {self.chat_history_file}."
-                    " Permission denied."
-                )
+            except (PermissionError, OSError) as err:
+                print(f"Warning: Unable to write to chat history file {self.chat_history_file}.")
+                print(err)
+                self.chat_history_file = None  # Disable further attempts to write

    def format_files_for_input(self, rel_fnames, rel_read_only_fnames):
        if not self.pretty:
            read_only_files = []
            for full_path in sorted(rel_read_only_fnames or []):
                read_only_files.append(f"{full_path} (read only)")

            editable_files = []
            for full_path in sorted(rel_fnames):
                if full_path in rel_read_only_fnames:
                    continue
                editable_files.append(f"{full_path}")

            return "\n".join(read_only_files + editable_files) + "\n"

        output = StringIO()
        console = Console(file=output, force_terminal=False)

        read_only_files = sorted(rel_read_only_fnames or [])
        editable_files = [f for f in sorted(rel_fnames) if f not in rel_read_only_fnames]

        if read_only_files:
            files_with_label = ["Readonly:"] + read_only_files
            read_only_output = StringIO()
            Console(file=read_only_output, force_terminal=False).print(Columns(files_with_label))
            read_only_lines = read_only_output.getvalue().splitlines()
            console.print(Columns(files_with_label))

        if editable_files:
            files_with_label = editable_files
            if read_only_files:
                files_with_label = ["Editable:"] + editable_files
            editable_output = StringIO()
            Console(file=editable_output, force_terminal=False).print(Columns(files_with_label))
            editable_lines = editable_output.getvalue().splitlines()

            if len(read_only_lines) > 1 or len(editable_lines) > 1:
                console.print()
            console.print(Columns(files_with_label))

        return output.getvalue()
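The non-pretty branch of `format_files_for_input` reduces to a simple sort-and-label pass: read-only files first with a suffix, then the remaining editable files. That branch in isolation, as a standalone sketch:

```python
def format_files(rel_fnames, rel_read_only_fnames):
    # Read-only files come first, tagged explicitly
    read_only = [f"{p} (read only)" for p in sorted(rel_read_only_fnames or [])]

    # Editable files are whatever is not read-only
    editable = [p for p in sorted(rel_fnames) if p not in rel_read_only_fnames]

    return "\n".join(read_only + editable) + "\n"


print(format_files(["a.py", "docs.md"], ["docs.md"]), end="")
# docs.md (read only)
# a.py
```

The pretty branch produces the same grouping but renders each group through Rich `Columns` for multi-column terminal layout.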

def get_rel_fname(fname, root):
    try:
        return os.path.relpath(fname, root)
    except ValueError:
        return fname
@@ -83,7 +83,11 @@ class Linter:

    def lint(self, fname, cmd=None):
        rel_fname = self.get_rel_fname(fname)
-        code = Path(fname).read_text(encoding=self.encoding, errors="replace")
+        try:
+            code = Path(fname).read_text(encoding=self.encoding, errors="replace")
+        except OSError as err:
+            print(f"Unable to read {fname}: {err}")
+            return

        if cmd:
            cmd = cmd.strip()

@@ -211,13 +215,18 @@ def basic_lint(fname, code):

    try:
        parser = get_parser(lang)
-    except OSError as err:
+    except Exception as err:
        print(f"Unable to load parser: {err}")
        return

    tree = parser.parse(bytes(code, "utf-8"))

-    errors = traverse_tree(tree.root_node)
+    try:
+        errors = traverse_tree(tree.root_node)
+    except RecursionError:
+        print(f"Unable to lint {fname} due to RecursionError")
+        return

    if not errors:
        return
aider/main.py (200 lines)
@@ -8,10 +8,12 @@ import traceback
from pathlib import Path

import git
+import importlib_resources
from dotenv import load_dotenv
from prompt_toolkit.enums import EditingMode

-from aider import __version__, models, utils
+from aider import __version__, models, urls, utils
from aider.analytics import Analytics
from aider.args import get_parser
from aider.coders import Coder
from aider.commands import Commands, SwitchCoder

@@ -26,6 +28,23 @@ from aider.versioncheck import check_version, install_from_main_branch, install_
from .dump import dump  # noqa: F401


def check_config_files_for_yes(config_files):
    found = False
    for config_file in config_files:
        if Path(config_file).exists():
            try:
                with open(config_file, "r") as f:
                    for line in f:
                        if line.strip().startswith("yes:"):
                            print("Configuration error detected.")
                            print(f"The file {config_file} contains a line starting with 'yes:'")
                            print("Please replace 'yes:' with 'yes-always:' in this file.")
                            found = True
            except Exception:
                pass
    return found


def get_git_root():
    """Try and guess the git repo, since the conf.yml can be at the repo root"""
    try:
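The detection in `check_config_files_for_yes` is just a line-prefix scan after stripping whitespace, so indented or trailing-comment variants of `yes:` are still caught. The core check, extracted as a standalone sketch:

```python
def has_bare_yes(lines):
    # Flag any config line whose stripped form starts with the
    # deprecated "yes:" key (should be "yes-always:" instead)
    return any(line.strip().startswith("yes:") for line in lines)


print(has_bare_yes(["model: gpt-4\n", "  yes: true\n"]))  # True
print(has_bare_yes(["yes-always: true\n"]))               # False
```

Since `"yes-always:"` does not start with `"yes:"`, the renamed key passes cleanly.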
@@ -40,7 +59,7 @@ def guessed_wrong_repo(io, git_root, fnames, git_dname):

    try:
        check_repo = Path(GitRepo(io, fnames, git_dname).root).resolve()
-    except FileNotFoundError:
+    except (OSError,) + ANY_GIT_ERROR:
        return

    # we had no guess, rely on the "true" repo result
@@ -68,15 +87,25 @@ def make_new_repo(git_root, io):


def setup_git(git_root, io):
+    try:
+        cwd = Path.cwd()
+    except OSError:
+        cwd = None
+
    repo = None

    if git_root:
-        repo = git.Repo(git_root)
-    elif Path.cwd() == Path.home():
+        try:
+            repo = git.Repo(git_root)
+        except ANY_GIT_ERROR:
+            pass
+    elif cwd == Path.home():
        io.tool_warning("You should probably run aider in a directory, not your home dir.")
        return
-    elif io.confirm_ask("No git repo found, create one to track aider's changes (recommended)?"):
-        git_root = str(Path.cwd().resolve())
+    elif cwd and io.confirm_ask(
+        "No git repo found, create one to track aider's changes (recommended)?"
+    ):
+        git_root = str(cwd.resolve())
        repo = make_new_repo(git_root, io)

    if not repo:
@@ -114,32 +143,39 @@ def check_gitignore(git_root, io, ask=True):

    try:
        repo = git.Repo(git_root)
-        if repo.ignored(".aider"):
+        if repo.ignored(".aider") and repo.ignored(".env"):
            return
    except ANY_GIT_ERROR:
        pass

-    pat = ".aider*"
+    patterns = [".aider*", ".env"]
+    patterns_to_add = []

    gitignore_file = Path(git_root) / ".gitignore"
    if gitignore_file.exists():
        content = io.read_text(gitignore_file)
        if content is None:
            return
-        if pat in content.splitlines():
-            return
+        existing_lines = content.splitlines()
+        for pat in patterns:
+            if pat not in existing_lines:
+                patterns_to_add.append(pat)
    else:
        content = ""
+        patterns_to_add = patterns

-    if ask and not io.confirm_ask(f"Add {pat} to .gitignore (recommended)?"):
+    if not patterns_to_add:
        return

+    if ask and not io.confirm_ask(f"Add {', '.join(patterns_to_add)} to .gitignore (recommended)?"):
+        return
+
    if content and not content.endswith("\n"):
        content += "\n"
-    content += pat + "\n"
+    content += "\n".join(patterns_to_add) + "\n"
    io.write_text(gitignore_file, content)

-    io.tool_output(f"Added {pat} to .gitignore")
+    io.tool_output(f"Added {', '.join(patterns_to_add)} to .gitignore")
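The new `check_gitignore` computes only the patterns that are actually missing, comparing against exact `.gitignore` lines rather than substrings. That selection step in isolation, as a standalone sketch:

```python
def missing_patterns(gitignore_text, patterns=(".aider*", ".env")):
    # Compare against whole lines, so ".env.example" does not
    # count as already ignoring ".env"
    existing = gitignore_text.splitlines()
    return [p for p in patterns if p not in existing]


print(missing_patterns(".aider*\n"))         # ['.env']
print(missing_patterns(""))                  # ['.aider*', '.env']
print(missing_patterns(".aider*\n.env\n"))   # []
```

Only the missing patterns are offered to the user and appended, so re-running aider against an already-configured repo asks nothing.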

def check_streamlit_install(io):
@@ -169,7 +205,10 @@ def launch_gui(args):
        "--server.runOnSave=false",
    ]

-    if "-dev" in __version__:
+    # https://github.com/Aider-AI/aider/issues/2193
+    is_dev = "-dev" in str(__version__)
+
+    if is_dev:
        print("Watching for file changes.")
    else:
        st_args += [
@@ -217,16 +256,23 @@ def parse_lint_cmds(lint_cmds, io):
    return res


-def generate_search_path_list(default_fname, git_root, command_line_file):
+def generate_search_path_list(default_file, git_root, command_line_file):
    files = []
-    default_file = Path(default_fname)
    files.append(Path.home() / default_file)  # homedir
    if git_root:
        files.append(Path(git_root) / default_file)  # git root
-    files.append(default_file.resolve())
+    files.append(default_file)
    if command_line_file:
        files.append(command_line_file)
-    files = [Path(fn).resolve() for fn in files]
+
+    resolved_files = []
+    for fn in files:
+        try:
+            resolved_files.append(Path(fn).resolve())
+        except OSError:
+            pass
+
+    files = resolved_files
    files.reverse()
    uniq = []
    for fn in files:
@@ -266,7 +312,7 @@ def register_models(git_root, model_settings_fname, io, verbose=False):
        return None


-def load_dotenv_files(git_root, dotenv_fname):
+def load_dotenv_files(git_root, dotenv_fname, encoding="utf-8"):
    dotenv_files = generate_search_path_list(
        ".env",
        git_root,
@@ -274,9 +320,14 @@ def load_dotenv_files(git_root, dotenv_fname):
    )
    loaded = []
    for fname in dotenv_files:
-        if Path(fname).exists():
-            loaded.append(fname)
-            load_dotenv(fname, override=True)
+        try:
+            if Path(fname).exists():
+                load_dotenv(fname, override=True, encoding=encoding)
+                loaded.append(fname)
+        except OSError as e:
+            print(f"OSError loading {fname}: {e}")
+        except Exception as e:
+            print(f"Error loading {fname}: {e}")
    return loaded
@@ -285,6 +336,10 @@ def register_litellm_models(git_root, model_metadata_fname, io, verbose=False):
        ".aider.model.metadata.json", git_root, model_metadata_fname
    )

+    # Add the resource file path
+    resource_metadata = importlib_resources.files("aider.resources").joinpath("model-metadata.json")
+    model_metatdata_files.append(str(resource_metadata))
+
    try:
        model_metadata_files_loaded = models.register_litellm_models(model_metatdata_files)
        if len(model_metadata_files_loaded) > 0 and verbose:
@@ -304,6 +359,7 @@ def sanity_check_repo(repo, io):
        io.tool_error("The git repo does not seem to have a working tree?")
        return False

+    bad_ver = False
    try:
        repo.get_tracked_files()
        if not repo.git_repo_error:
@@ -320,7 +376,7 @@ def sanity_check_repo(repo, io):
        io.tool_error("Aider only works with git repos with version number 1 or 2.")
        io.tool_output("You may be able to convert your repo: git update-index --index-version=2")
        io.tool_output("Or run aider --no-git to proceed without using git.")
-        io.tool_output("https://github.com/paul-gauthier/aider/issues/211")
+        io.offer_url(urls.git_index_version, "Open documentation url for more info?")
        return False

    io.tool_error("Unable to read git repository, it may be corrupt?")
@@ -341,7 +397,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F

     conf_fname = Path(".aider.conf.yml")

-    default_config_files = [conf_fname.resolve()]  # CWD
+    default_config_files = []
+    try:
+        default_config_files += [conf_fname.resolve()]  # CWD
+    except OSError:
+        pass
+
     if git_root:
         git_conf = Path(git_root) / conf_fname  # git root
         if git_conf not in default_config_files:
@@ -350,7 +411,13 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     default_config_files = list(map(str, default_config_files))

     parser = get_parser(default_config_files, git_root)
-    args, unknown = parser.parse_known_args(argv)
+    try:
+        args, unknown = parser.parse_known_args(argv)
+    except AttributeError as e:
+        if all(word in str(e) for word in ["bool", "object", "has", "no", "attribute", "strip"]):
+            if check_config_files_for_yes(default_config_files):
+                return 1
+        raise e

     if args.verbose:
         print("Config files search order, if no --config:")
@@ -361,19 +428,26 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     default_config_files.reverse()

     parser = get_parser(default_config_files, git_root)

     args, unknown = parser.parse_known_args(argv)

     # Load the .env file specified in the arguments
-    loaded_dotenvs = load_dotenv_files(git_root, args.env_file)
+    loaded_dotenvs = load_dotenv_files(git_root, args.env_file, args.encoding)

     # Parse again to include any arguments that might have been defined in .env
     args = parser.parse_args(argv)

+    if args.analytics_disable:
+        analytics = Analytics(permanently_disable=True)
+        print("Analytics have been permanently disabled.")
+
     if not args.verify_ssl:
         import httpx

+        os.environ["SSL_VERIFY"] = ""
+        litellm._load_litellm()
         litellm._lazy_module.client_session = httpx.Client(verify=False)
         litellm._lazy_module.aclient_session = httpx.AsyncClient(verify=False)

     if args.dark_mode:
         args.user_input_color = "#32FF32"
@@ -389,28 +463,34 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         args.assistant_output_color = "blue"
         args.code_theme = "default"

-    if return_coder and args.yes is None:
-        args.yes = True
+    if return_coder and args.yes_always is None:
+        args.yes_always = True

     editing_mode = EditingMode.VI if args.vim else EditingMode.EMACS

     def get_io(pretty):
         return InputOutput(
             pretty,
-            args.yes,
+            args.yes_always,
             args.input_history_file,
             args.chat_history_file,
             input=input,
             output=output,
             user_input_color=args.user_input_color,
             tool_output_color=args.tool_output_color,
+            tool_warning_color=args.tool_warning_color,
             tool_error_color=args.tool_error_color,
+            completion_menu_color=args.completion_menu_color,
+            completion_menu_bg_color=args.completion_menu_bg_color,
+            completion_menu_current_color=args.completion_menu_current_color,
+            completion_menu_current_bg_color=args.completion_menu_current_bg_color,
             assistant_output_color=args.assistant_output_color,
             code_theme=args.code_theme,
             dry_run=args.dry_run,
             encoding=args.encoding,
+            llm_history_file=args.llm_history_file,
             editingmode=editing_mode,
             fancy_input=args.fancy_input,
         )

     io = get_io(args.pretty)
@@ -422,9 +502,35 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         io = get_io(False)
         io.tool_warning("Terminal does not support pretty output (UnicodeDecodeError)")

+    analytics = Analytics(logfile=args.analytics_log, permanently_disable=args.analytics_disable)
+    if args.analytics:
+        if analytics.need_to_ask():
+            io.tool_output(
+                "Aider respects your privacy and never collects your code, chat messages, keys or"
+                " personal info."
+            )
+            io.tool_output(f"For more info: {urls.analytics}")
+            disable = not io.confirm_ask(
+                "Allow collection of anonymous analytics to help improve aider?"
+            )
+
+            analytics.asked_opt_in = True
+            if disable:
+                analytics.disable(permanently=True)
+                io.tool_output("Analytics have been permanently disabled.")
+
+            analytics.save_data()
+            io.tool_output()
+
+    # This is a no-op if the user has opted out
+    analytics.enable()
+
+    analytics.event("launched")
+
     if args.gui and not return_coder:
         if not check_streamlit_install(io):
             return
+        analytics.event("gui session")
         launch_gui(argv)
         return
@@ -519,9 +625,14 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     if not args.model:
         args.model = "gpt-4o-2024-08-06"
         if os.environ.get("ANTHROPIC_API_KEY"):
-            args.model = "claude-3-5-sonnet-20240620"
+            args.model = "claude-3-5-sonnet-20241022"

-    main_model = models.Model(args.model, weak_model=args.weak_model)
+    main_model = models.Model(
+        args.model,
+        weak_model=args.weak_model,
+        editor_model=args.editor_model,
+        editor_edit_format=args.editor_edit_format,
+    )

     if args.verbose:
         io.tool_output("Model info:")
@@ -534,11 +645,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     if args.show_model_warnings:
         problem = models.sanity_check_models(io, main_model)
         if problem:
+            analytics.event("model warning", main_model=main_model)
             io.tool_output("You can skip this check with --no-show-model-warnings")
-            io.tool_output()

             try:
-                if not io.confirm_ask("Proceed anyway?"):
-                    return 1
+                io.offer_url(urls.model_warnings, "Open documentation url for more info?")
+                io.tool_output()
             except KeyboardInterrupt:
                 return 1
@@ -561,8 +673,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     except FileNotFoundError:
         pass

-    if not sanity_check_repo(repo, io):
-        return 1
+    if not args.skip_sanity_check_repo:
+        if not sanity_check_repo(repo, io):
+            return 1

     commands = Commands(
         io, None, verify_ssl=args.verify_ssl, args=args, parser=parser, verbose=args.verbose
@@ -579,7 +692,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     if not main_model.streaming:
         if args.stream:
             io.tool_warning(
-                "Warning: Streaming is not supported by the selected model. Disabling streaming."
+                f"Warning: Streaming is not supported by {main_model.name}. Disabling streaming."
             )
             args.stream = False
@@ -606,6 +719,7 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         test_cmd=args.test_cmd,
         commands=commands,
         summarizer=summarizer,
+        analytics=analytics,
         map_refresh=args.map_refresh,
         cache_prompts=args.cache_prompts,
         map_mul_no_files=args.map_multiplier_no_files,
@@ -664,6 +778,10 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
         coder.apply_updates()
         return

+    if args.apply_clipboard_edits:
+        args.edit_format = main_model.editor_edit_format
+        args.message = "/paste"
+
     if "VSCODE_GIT_IPC_HANDLE" in os.environ:
         args.pretty = False
         io.tool_output("VSCode terminal detected, pretty output has been disabled.")
@@ -679,6 +797,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     io.tool_output(f"Cur working dir: {Path.cwd()}")
    io.tool_output(f"Git working dir: {git_root}")

+    if args.load:
+        commands.cmd_load(args.load)
+
     if args.message:
         io.add_to_input_history(args.message)
         io.tool_output()
@@ -704,6 +825,8 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
     if args.exit:
         return

+    analytics.event("cli session", main_model=main_model, edit_format=main_model.edit_format)
+
     while True:
         try:
             coder.run()
@@ -751,7 +874,8 @@ def check_and_load_imports(io, verbose=False):
     except Exception as err:
         io.tool_error(str(err))
         io.tool_output("Error loading required imports. Did you install aider properly?")
-        io.tool_output("https://aider.chat/docs/install/install.html")
+        io.offer_url(urls.install_properly, "Open documentation url for more info?")
+
         sys.exit(1)

     installs[str(key)] = True

aider/models.py (553 changed lines)
@@ -13,7 +13,6 @@ import json5
 import yaml
 from PIL import Image

 from aider import urls
 from aider.dump import dump  # noqa: F401
 from aider.llm import litellm
@@ -53,9 +52,11 @@ ANTHROPIC_MODELS = """
claude-2
claude-2.1
claude-3-haiku-20240307
+claude-3-5-haiku-20241022
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-5-sonnet-20240620
+claude-3-5-sonnet-20241022
"""

 ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.strip()]
@@ -69,17 +70,17 @@ class ModelSettings:
     weak_model_name: Optional[str] = None
     use_repo_map: bool = False
     send_undo_reply: bool = False
-    accepts_images: bool = False
     lazy: bool = False
     reminder: str = "user"
     examples_as_sys_msg: bool = False
-    extra_headers: Optional[dict] = None
-    max_tokens: Optional[int] = None
+    extra_params: Optional[dict] = None
     cache_control: bool = False
     caches_by_default: bool = False
     use_system_prompt: bool = True
     use_temperature: bool = True
     streaming: bool = True
+    editor_model_name: Optional[str] = None
+    editor_edit_format: Optional[str] = None


 # https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
@@ -124,7 +125,6 @@ MODEL_SETTINGS = [
         "udiff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -133,7 +133,6 @@ MODEL_SETTINGS = [
         "udiff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -142,16 +141,15 @@ MODEL_SETTINGS = [
         "diff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
+        editor_edit_format="editor-diff",
     ),
     ModelSettings(
         "openai/gpt-4o-2024-08-06",
         "diff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -160,7 +158,6 @@ MODEL_SETTINGS = [
         "diff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -169,15 +166,14 @@ MODEL_SETTINGS = [
         "diff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
+        editor_edit_format="editor-diff",
     ),
     ModelSettings(
         "gpt-4o-mini",
         "whole",
         weak_model_name="gpt-4o-mini",
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -185,7 +181,6 @@ MODEL_SETTINGS = [
         "openai/gpt-4o-mini",
         "whole",
         weak_model_name="openai/gpt-4o-mini",
-        accepts_images=True,
         lazy=True,
         reminder="sys",
     ),
@@ -211,7 +206,6 @@ MODEL_SETTINGS = [
         "diff",
         weak_model_name="gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         reminder="sys",
     ),
     ModelSettings(
@@ -240,30 +234,33 @@ MODEL_SETTINGS = [
     ModelSettings(
         "claude-3-opus-20240229",
         "diff",
-        weak_model_name="claude-3-haiku-20240307",
+        weak_model_name="claude-3-5-haiku-20241022",
         use_repo_map=True,
     ),
     ModelSettings(
         "openrouter/anthropic/claude-3-opus",
         "diff",
-        weak_model_name="openrouter/anthropic/claude-3-haiku",
+        weak_model_name="openrouter/anthropic/claude-3-5-haiku",
         use_repo_map=True,
     ),
     ModelSettings(
         "claude-3-sonnet-20240229",
         "whole",
-        weak_model_name="claude-3-haiku-20240307",
+        weak_model_name="claude-3-5-haiku-20241022",
     ),
     ModelSettings(
         "claude-3-5-sonnet-20240620",
         "diff",
-        weak_model_name="claude-3-haiku-20240307",
+        weak_model_name="claude-3-5-haiku-20241022",
+        editor_model_name="claude-3-5-sonnet-20240620",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         examples_as_sys_msg=True,
-        accepts_images=True,
-        max_tokens=8192,
-        extra_headers={
-            "anthropic-beta": ANTHROPIC_BETA_HEADER,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
         },
         cache_control=True,
         reminder="user",
@@ -271,12 +268,84 @@ MODEL_SETTINGS = [
     ModelSettings(
         "anthropic/claude-3-5-sonnet-20240620",
         "diff",
-        weak_model_name="claude-3-haiku-20240307",
+        weak_model_name="anthropic/claude-3-5-haiku-20241022",
+        editor_model_name="anthropic/claude-3-5-sonnet-20240620",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         examples_as_sys_msg=True,
-        max_tokens=8192,
-        extra_headers={
-            "anthropic-beta": ANTHROPIC_BETA_HEADER,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
         },
         cache_control=True,
         reminder="user",
     ),
+    ModelSettings(
+        "anthropic/claude-3-5-sonnet-20241022",
+        "diff",
+        weak_model_name="anthropic/claude-3-5-haiku-20241022",
+        editor_model_name="anthropic/claude-3-5-sonnet-20241022",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
+        },
+        cache_control=True,
+        reminder="user",
+    ),
+    ModelSettings(
+        "bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
+        "diff",
+        weak_model_name="bedrock/anthropic.claude-3-5-haiku-20241022-v1:0",
+        editor_model_name="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
+        },
+        cache_control=True,
+        reminder="user",
+    ),
+    ModelSettings(
+        "anthropic/claude-3-5-sonnet-latest",
+        "diff",
+        weak_model_name="anthropic/claude-3-5-haiku-20241022",
+        editor_model_name="anthropic/claude-3-5-sonnet-20241022",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
+        },
+        cache_control=True,
+        reminder="user",
+    ),
+    ModelSettings(
+        "claude-3-5-sonnet-20241022",
+        "diff",
+        weak_model_name="claude-3-5-haiku-20241022",
+        editor_model_name="claude-3-5-sonnet-20241022",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+            "max_tokens": 8192,
+        },
+        cache_control=True,
+        reminder="user",
@@ -286,29 +355,96 @@ MODEL_SETTINGS = [
         "whole",
         weak_model_name="anthropic/claude-3-haiku-20240307",
         examples_as_sys_msg=True,
-        extra_headers={
-            "anthropic-beta": ANTHROPIC_BETA_HEADER,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
         },
         cache_control=True,
     ),
+    ModelSettings(
+        "anthropic/claude-3-5-haiku-20241022",
+        "diff",
+        weak_model_name="anthropic/claude-3-5-haiku-20241022",
+        use_repo_map=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+        },
+        cache_control=True,
+    ),
+    ModelSettings(
+        "bedrock/anthropic.claude-3-5-haiku-20241022-v1:0",
+        "diff",
+        weak_model_name="bedrock/anthropic.claude-3-5-haiku-20241022-v1:0",
+        use_repo_map=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+        },
+        cache_control=True,
+    ),
+    ModelSettings(
+        "claude-3-5-haiku-20241022",
+        "diff",
+        weak_model_name="claude-3-5-haiku-20241022",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
+        },
+        cache_control=True,
+    ),
+    ModelSettings(
+        "vertex_ai/claude-3-5-haiku@20241022",
+        "diff",
+        weak_model_name="vertex_ai/claude-3-5-haiku@20241022",
+        use_repo_map=True,
+        extra_params={
+            "max_tokens": 4096,
+        },
+    ),
     ModelSettings(
         "claude-3-haiku-20240307",
         "whole",
         weak_model_name="claude-3-haiku-20240307",
         examples_as_sys_msg=True,
-        extra_headers={
-            "anthropic-beta": ANTHROPIC_BETA_HEADER,
+        extra_params={
+            "extra_headers": {
+                "anthropic-beta": ANTHROPIC_BETA_HEADER,
+            },
         },
         cache_control=True,
     ),
     ModelSettings(
         "openrouter/anthropic/claude-3.5-sonnet",
         "diff",
-        weak_model_name="openrouter/anthropic/claude-3-haiku-20240307",
+        weak_model_name="openrouter/anthropic/claude-3-5-haiku",
+        editor_model_name="openrouter/anthropic/claude-3.5-sonnet",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         examples_as_sys_msg=True,
-        accepts_images=True,
-        max_tokens=8192,
+        extra_params={
+            "max_tokens": 8192,
+        },
         reminder="user",
         cache_control=True,
     ),
     ModelSettings(
         "openrouter/anthropic/claude-3.5-sonnet:beta",
         "diff",
+        weak_model_name="openrouter/anthropic/claude-3-5-haiku:beta",
+        editor_model_name="openrouter/anthropic/claude-3.5-sonnet:beta",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         examples_as_sys_msg=True,
         extra_params={
             "max_tokens": 8192,
         },
         reminder="user",
         cache_control=True,
     ),
@@ -317,23 +453,39 @@ MODEL_SETTINGS = [
     ModelSettings(
         "vertex_ai/claude-3-5-sonnet@20240620",
         "diff",
-        weak_model_name="vertex_ai/claude-3-haiku@20240307",
+        weak_model_name="vertex_ai/claude-3-5-haiku@20241022",
+        editor_model_name="vertex_ai/claude-3-5-sonnet@20240620",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         examples_as_sys_msg=True,
-        accepts_images=True,
-        max_tokens=8192,
+        extra_params={
+            "max_tokens": 8192,
+        },
         reminder="user",
     ),
+    ModelSettings(
+        "vertex_ai/claude-3-5-sonnet-v2@20241022",
+        "diff",
+        weak_model_name="vertex_ai/claude-3-5-haiku@20241022",
+        editor_model_name="vertex_ai/claude-3-5-sonnet-v2@20241022",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        extra_params={
+            "max_tokens": 8192,
+        },
+        reminder="user",
+    ),
     ModelSettings(
         "vertex_ai/claude-3-opus@20240229",
         "diff",
-        weak_model_name="vertex_ai/claude-3-haiku@20240307",
+        weak_model_name="vertex_ai/claude-3-5-haiku@20241022",
         use_repo_map=True,
     ),
     ModelSettings(
         "vertex_ai/claude-3-sonnet@20240229",
         "whole",
-        weak_model_name="vertex_ai/claude-3-haiku@20240307",
+        weak_model_name="vertex_ai/claude-3-5-haiku@20241022",
     ),
     # Cohere
     ModelSettings(
@@ -374,6 +526,15 @@ MODEL_SETTINGS = [
         examples_as_sys_msg=True,
     ),
     # Gemini
+    ModelSettings(
+        "gemini/gemini-1.5-pro-002",
+        "diff",
+        use_repo_map=True,
+    ),
+    ModelSettings(
+        "gemini/gemini-1.5-flash-002",
+        "whole",
+    ),
     ModelSettings(
         "gemini/gemini-1.5-pro",
         "diff-fenced",
@@ -389,6 +550,11 @@ MODEL_SETTINGS = [
         "diff-fenced",
         use_repo_map=True,
     ),
+    ModelSettings(
+        "vertex_ai/gemini-pro-experimental",
+        "diff-fenced",
+        use_repo_map=True,
+    ),
     ModelSettings(
         "gemini/gemini-1.5-flash-exp-0827",
         "whole",
@@ -401,7 +567,9 @@ MODEL_SETTINGS = [
         use_repo_map=True,
         examples_as_sys_msg=True,
         reminder="sys",
-        max_tokens=8192,
+        extra_params={
+            "max_tokens": 8192,
+        },
     ),
     ModelSettings(
         "deepseek/deepseek-coder",
@@ -410,7 +578,30 @@ MODEL_SETTINGS = [
         examples_as_sys_msg=True,
         reminder="sys",
         caches_by_default=True,
-        max_tokens=8192,
+        extra_params={
+            "max_tokens": 8192,
+        },
+    ),
+    ModelSettings(
+        "deepseek-chat",
+        "diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        reminder="sys",
+        extra_params={
+            "max_tokens": 8192,
+        },
+    ),
+    ModelSettings(
+        "deepseek-coder",
+        "diff",
+        use_repo_map=True,
+        examples_as_sys_msg=True,
+        reminder="sys",
+        caches_by_default=True,
+        extra_params={
+            "max_tokens": 8192,
+        },
     ),
     ModelSettings(
         "openrouter/deepseek/deepseek-coder",
@@ -424,14 +615,28 @@ MODEL_SETTINGS = [
         "diff",
         weak_model_name="openrouter/openai/gpt-4o-mini",
         use_repo_map=True,
-        accepts_images=True,
         lazy=True,
         reminder="sys",
+        editor_edit_format="editor-diff",
     ),
     ModelSettings(
         "openai/o1-mini",
         "whole",
         weak_model_name="openai/gpt-4o-mini",
+        editor_model_name="openai/gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
         use_temperature=False,
         streaming=False,
     ),
     ModelSettings(
         "azure/o1-mini",
         "whole",
         weak_model_name="azure/gpt-4o-mini",
+        editor_model_name="azure/gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
@@ -442,6 +647,8 @@ MODEL_SETTINGS = [
         "o1-mini",
         "whole",
         weak_model_name="gpt-4o-mini",
+        editor_model_name="gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
@@ -452,6 +659,20 @@ MODEL_SETTINGS = [
         "openai/o1-preview",
         "diff",
         weak_model_name="openai/gpt-4o-mini",
+        editor_model_name="openai/gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
         use_temperature=False,
         streaming=False,
     ),
+    ModelSettings(
+        "azure/o1-preview",
+        "diff",
+        weak_model_name="azure/gpt-4o-mini",
+        editor_model_name="azure/gpt-4o",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+        reminder="user",
+        use_system_prompt=False,
@@ -460,8 +681,10 @@ MODEL_SETTINGS = [
     ),
     ModelSettings(
         "o1-preview",
-        "diff",
+        "architect",
         weak_model_name="gpt-4o-mini",
+        editor_model_name="gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
@@ -472,6 +695,8 @@ MODEL_SETTINGS = [
         "openrouter/openai/o1-mini",
         "whole",
         weak_model_name="openrouter/openai/gpt-4o-mini",
+        editor_model_name="openrouter/openai/gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
@@ -482,92 +707,107 @@ MODEL_SETTINGS = [
         "openrouter/openai/o1-preview",
         "diff",
         weak_model_name="openrouter/openai/gpt-4o-mini",
+        editor_model_name="openrouter/openai/gpt-4o",
+        editor_edit_format="editor-diff",
         use_repo_map=True,
         reminder="user",
         use_system_prompt=False,
         use_temperature=False,
         streaming=False,
     ),
+    ModelSettings(
+        "openrouter/qwen/qwen-2.5-coder-32b-instruct",
+        "diff",
+        weak_model_name="openrouter/qwen/qwen-2.5-coder-32b-instruct",
+        editor_model_name="openrouter/qwen/qwen-2.5-coder-32b-instruct",
+        editor_edit_format="editor-diff",
+        use_repo_map=True,
+    ),
 ]


-model_info_url = (
-    "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
-)
+class ModelInfoManager:
+    MODEL_INFO_URL = (
+        "https://raw.githubusercontent.com/BerriAI/litellm/main/"
+        "model_prices_and_context_window.json"
+    )
+    CACHE_TTL = 60 * 60 * 24  # 24 hours

+    def __init__(self):
+        self.cache_dir = Path.home() / ".aider" / "caches"
+        self.cache_file = self.cache_dir / "model_prices_and_context_window.json"
+        self.content = None
+        self._load_cache()

-def get_model_flexible(model, content):
-    info = content.get(model, dict())
-    if info:
-        return info
-
-    pieces = model.split("/")
-    if len(pieces) == 2:
-        info = content.get(pieces[1])
-        if info and info.get("litellm_provider") == pieces[0]:
-            return info
-
-    return dict()
-
-
-def get_model_info(model):
-    if not litellm._lazy_module:
-        cache_dir = Path.home() / ".aider" / "caches"
-        cache_file = cache_dir / "model_prices_and_context_window.json"
-
+    def _load_cache(self):
         try:
-            cache_dir.mkdir(parents=True, exist_ok=True)
-            use_cache = True
+            self.cache_dir.mkdir(parents=True, exist_ok=True)
+            if self.cache_file.exists():
+                cache_age = time.time() - self.cache_file.stat().st_mtime
+                if cache_age < self.CACHE_TTL:
+                    self.content = json.loads(self.cache_file.read_text())
         except OSError:
-            # If we can't create the cache directory, we'll skip using the cache
-            use_cache = False
-
-        if use_cache:
-            current_time = time.time()
-            cache_age = (
-                current_time - cache_file.stat().st_mtime if cache_file.exists() else float("inf")
-            )
-
-            if cache_age < 60 * 60 * 24:
-                try:
-                    content = json.loads(cache_file.read_text())
-                    res = get_model_flexible(model, content)
-                    if res:
-                        return res
-                except Exception as ex:
-                    print(str(ex))
-
-        import requests
+            pass

+    def _update_cache(self):
         try:
-            response = requests.get(model_info_url, timeout=5)
+            import requests
+
+            response = requests.get(self.MODEL_INFO_URL, timeout=5)
             if response.status_code == 200:
-                content = response.json()
-                if use_cache:
-                    try:
-                        cache_file.write_text(json.dumps(content, indent=4))
-                    except OSError:
-                        # If we can't write to the cache file, we'll just skip caching
-                        pass
-                res = get_model_flexible(model, content)
-                if res:
-                    return res
+                self.content = response.json()
+                try:
+                    self.cache_file.write_text(json.dumps(self.content, indent=4))
+                except OSError:
+                    pass
         except Exception as ex:
             print(str(ex))

-    # If all else fails, do it the slow way...
-    try:
-        info = litellm.get_model_info(model)
-        return info
-    except Exception:
+    def get_model_from_cached_json_db(self, model):
+        if not self.content:
+            self._update_cache()
+
+        if not self.content:
+            return dict()
+
+        info = self.content.get(model, dict())
+        if info:
+            return info
+
+        pieces = model.split("/")
+        if len(pieces) == 2:
+            info = self.content.get(pieces[1])
+            if info and info.get("litellm_provider") == pieces[0]:
+                return info
+
+        return dict()
+
+    def get_model_info(self, model):
+        cached_info = self.get_model_from_cached_json_db(model)
+
+        litellm_info = None
+        if litellm._lazy_module or not cached_info:
+            try:
+                litellm_info = litellm.get_model_info(model)
+            except Exception as ex:
+                if "model_prices_and_context_window.json" not in str(ex):
+                    print(str(ex))
+
+        if litellm_info:
+            return litellm_info
+
+        return cached_info
+
+
+model_info_manager = ModelInfoManager()


 class Model(ModelSettings):
-    def __init__(self, model, weak_model=None):
+    def __init__(self, model, weak_model=None, editor_model=None, editor_edit_format=None):
         self.name = model
         self.max_chat_history_tokens = 1024
         self.weak_model = None
+        self.editor_model = None

         self.info = self.get_model_info(model)
@@ -588,8 +828,13 @@ class Model(ModelSettings):
         else:
             self.get_weak_model(weak_model)

+        if editor_model is False:
+            self.editor_model_name = None
+        else:
+            self.get_editor_model(editor_model, editor_edit_format)
+
     def get_model_info(self, model):
-        return get_model_info(model)
+        return model_info_manager.get_model_info(model)

     def configure_model_settings(self, model):
         for ms in MODEL_SETTINGS:
@@ -628,7 +873,23 @@ class Model(ModelSettings):
             self.edit_format = "diff"
             self.use_repo_map = True
             self.examples_as_sys_msg = True
-            self.reminder = None
+            self.reminder = "user"

+        if model.startswith("o1-") or "/o1-" in model:
+            self.use_system_prompt = False
+            self.use_temperature = False
+            self.streaming = False
+
+        if (
+            "qwen" in model
+            and "coder" in model
+            and ("2.5" in model or "2-5" in model)
+            and "32b" in model
+        ):
+            self.edit_format = "diff"
+            self.editor_edit_format = "editor-diff"
+            self.use_repo_map = True
+
         # use the defaults
         if self.edit_format == "diff":
@@ -659,6 +920,26 @@ class Model(ModelSettings):
     def commit_message_models(self):
         return [self.weak_model, self]

+    def get_editor_model(self, provided_editor_model_name, editor_edit_format):
+        # If editor_model_name is provided, override the model settings
+        if provided_editor_model_name:
+            self.editor_model_name = provided_editor_model_name
+        if editor_edit_format:
+            self.editor_edit_format = editor_edit_format
+
+        if not self.editor_model_name or self.editor_model_name == self.name:
+            self.editor_model = self
+        else:
+            self.editor_model = Model(
+                self.editor_model_name,
+                editor_model=False,
+            )
+
+        if not self.editor_edit_format:
+            self.editor_edit_format = self.editor_model.edit_format
+
+        return self.editor_model
+
     def tokenizer(self, text):
         return litellm.encode(model=self.name, text=text)
@@ -796,8 +1077,14 @@ def register_litellm_models(model_fnames):
|
||||
continue
|
||||
|
||||
try:
|
||||
with open(model_fname, "r") as model_def_file:
|
||||
model_def = json5.load(model_def_file)
|
||||
data = Path(model_fname).read_text()
|
||||
if not data.strip():
|
||||
continue
|
||||
model_def = json5.loads(data)
|
||||
if not model_def:
|
||||
continue
|
||||
|
||||
# only load litellm if we have actual data
|
||||
litellm._load_litellm()
|
||||
litellm.register_model(model_def)
|
||||
except Exception as e:
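The hunk above changes `register_litellm_models` to read the file's text first and skip empty or null definitions before touching litellm. A minimal stand-alone sketch of that guard, using stdlib `json` in place of `json5` and hypothetical names (`load_model_defs`, the sample model key) for illustration:

```python
import json
import tempfile
from pathlib import Path


def load_model_defs(fname):
    """Return parsed model definitions, or None for blank/empty files."""
    data = Path(fname).read_text()
    if not data.strip():
        return None  # empty file: nothing to register
    model_def = json.loads(data)
    if not model_def:
        return None  # parsed, but empty ({} or null)
    return model_def


# A real definition file is parsed...
path = Path(tempfile.mktemp(suffix=".json"))
path.write_text('{"my-model": {"max_tokens": 4096}}')
defs = load_model_defs(path)

# ...while a whitespace-only file is skipped instead of crashing json parsing.
blank = Path(tempfile.mktemp(suffix=".json"))
blank.write_text("   \n")
empty_defs = load_model_defs(blank)
```

The early `continue`-style returns mean litellm is only loaded and called when there is actual data to register.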
@@ -819,11 +1106,21 @@ def validate_variables(vars):


def sanity_check_models(io, main_model):
    problem_main = sanity_check_model(io, main_model)

    problem_weak = None
    problem_strong = sanity_check_model(io, main_model)
    if main_model.weak_model and main_model.weak_model is not main_model:
        problem_weak = sanity_check_model(io, main_model.weak_model)
        return problem_strong or problem_weak

    problem_editor = None
    if (
        main_model.editor_model
        and main_model.editor_model is not main_model
        and main_model.editor_model is not main_model.weak_model
    ):
        problem_editor = sanity_check_model(io, main_model.editor_model)

    return problem_main or problem_weak or problem_editor


def sanity_check_model(io, model):
@@ -834,7 +1131,7 @@ def sanity_check_model(io, model):
        io.tool_warning(f"Warning: {model} expects these environment variables")
        for key in model.missing_keys:
            value = os.environ.get(key, "")
            status = "✓ Set" if value else "✗ Not set"
            status = "Set" if value else "Not set"
            io.tool_output(f"- {key}: {status}")

    if platform.system() == "Windows" or True:
@@ -859,9 +1156,6 @@ def sanity_check_model(io, model):
        for match in possible_matches:
            io.tool_output(f"- {match}")

    if show:
        io.tool_output(f"For more info, see: {urls.model_warnings}")

    return show


@@ -914,20 +1208,37 @@ def print_matching_models(io, search):
        io.tool_output(f'No models match "{search}".')


def get_model_settings_as_yaml():
    import yaml

    model_settings_list = []
    for ms in MODEL_SETTINGS:
        model_settings_dict = {
            field.name: getattr(ms, field.name) for field in fields(ModelSettings)
        }
        model_settings_list.append(model_settings_dict)

    return yaml.dump(model_settings_list, default_flow_style=False)
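`get_model_settings_as_yaml` above iterates `dataclasses.fields()` to turn each settings object into a plain dict before dumping. A self-contained sketch of that shape, with a hypothetical trimmed-down `ModelSettings` and stdlib `json` standing in for `yaml` (assumptions, not aider's real class):

```python
import json
from dataclasses import dataclass, fields


@dataclass
class ModelSettings:  # illustrative stand-in, not aider's full class
    name: str
    edit_format: str = "whole"
    use_repo_map: bool = False


MODEL_SETTINGS = [
    ModelSettings("gpt-4o", edit_format="diff", use_repo_map=True),
    ModelSettings("gpt-3.5-turbo"),
]

# Same field.name/getattr pattern as get_model_settings_as_yaml,
# serialized with json here instead of yaml.
settings_list = [
    {field.name: getattr(ms, field.name) for field in fields(ModelSettings)}
    for ms in MODEL_SETTINGS
]
dumped = json.dumps(settings_list, indent=2)
```

Using `fields()` rather than `vars()` keeps the dump limited to declared dataclass fields, in declaration order.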
def main():
    if len(sys.argv) != 2:
        print("Usage: python models.py <model_name>")
    if len(sys.argv) < 2:
        print("Usage: python models.py <model_name> or python models.py --yaml")
        sys.exit(1)

    model_name = sys.argv[1]
    matching_models = fuzzy_match_models(model_name)

    if matching_models:
        print(f"Matching models for '{model_name}':")
        for model in matching_models:
            print(model)
    if sys.argv[1] == "--yaml":
        yaml_string = get_model_settings_as_yaml()
        print(yaml_string)
    else:
        print(f"No matching models found for '{model_name}'.")
        model_name = sys.argv[1]
        matching_models = fuzzy_match_models(model_name)

        if matching_models:
            print(f"Matching models for '{model_name}':")
            for model in matching_models:
                print(model)
        else:
            print(f"No matching models found for '{model_name}'.")


if __name__ == "__main__":

@@ -5,15 +5,21 @@

# Conventional Commits text adapted from:
# https://www.conventionalcommits.org/en/v1.0.0/#summary
commit_system = """You are an expert software engineer.
commit_system = """You are an expert software engineer that generates concise, \
one-line Git commit messages based on the provided diffs.
Review the provided context and diffs which are about to be committed to a git repo.
Review the diffs carefully.
Generate a commit message for those changes.
The commit message MUST use the imperative tense.
Generate a one-line commit message for those changes.
The commit message should be structured as follows: <type>: <description>
Use these for <type>: fix, feat, build, chore, ci, docs, style, refactor, perf, test
Reply with JUST the commit message, without quotes, comments, questions, etc!
Reply with one line only!

Ensure the commit message:
- Starts with the appropriate prefix.
- Is in the imperative mood (e.g., \"Add feature\" not \"Added feature\" or \"Adding feature\").
- Does not exceed 72 characters.

Reply only with the one-line commit message, without any additional text, explanations, \
or line breaks.
"""

# COMMANDS

@@ -10,7 +10,15 @@ from aider.sendchat import simple_send_with_retries

from .dump import dump  # noqa: F401

ANY_GIT_ERROR = (git.exc.ODBError, git.exc.GitError, OSError, IndexError, BufferError)
ANY_GIT_ERROR = (
    git.exc.ODBError,
    git.exc.GitError,
    OSError,
    IndexError,
    BufferError,
    TypeError,
    ValueError,
)


class GitRepo:
@@ -161,7 +169,7 @@ class GitRepo:
    def get_rel_repo_dir(self):
        try:
            return os.path.relpath(self.repo.git_dir, os.getcwd())
        except ValueError:
        except (ValueError, OSError):
            return self.repo.git_dir

    def get_commit_message(self, diffs, context):
@@ -185,7 +193,7 @@ class GitRepo:
            if max_tokens and num_tokens > max_tokens:
                continue
            commit_message = simple_send_with_retries(
                model.name, messages, extra_headers=model.extra_headers
                model.name, messages, extra_params=model.extra_params
            )
            if commit_message:
                break
@@ -323,6 +331,15 @@ class GitRepo:
            lines,
        )

    def git_ignored_file(self, path):
        if not self.repo:
            return
        try:
            if self.repo.ignored(path):
                return True
        except ANY_GIT_ERROR:
            return False

    def ignored_file(self, fname):
        self.refresh_aider_ignore()

@@ -336,7 +353,14 @@ class GitRepo:
    def ignored_file_raw(self, fname):
        if self.subtree_only:
            fname_path = Path(self.normalize_path(fname))
            cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
            try:
                cwd_path = Path.cwd().resolve().relative_to(Path(self.root).resolve())
            except ValueError:
                # Issue #1524
                # ValueError: 'C:\\dev\\squid-certbot' is not in the subpath of
                # 'C:\\dev\\squid-certbot'
                # Clearly, fname is not under cwd... so ignore it
                return True

            if cwd_path not in fname_path.parents and fname_path != cwd_path:
                return True
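The `try/except ValueError` added above guards `Path.relative_to`, which raises whenever one path is not lexically inside the other (the Windows case in Issue #1524). A minimal stdlib demo of that behavior, with a hypothetical `rel_to_root` helper and `PurePosixPath` for platform-independent results:

```python
from pathlib import PurePosixPath


def rel_to_root(cwd, root):
    """Mirror the guarded relative_to() call: None means 'not under root'."""
    try:
        return PurePosixPath(cwd).relative_to(PurePosixPath(root))
    except ValueError:
        # Same shape as the hunk above: cwd is not in the subpath of root,
        # so the caller treats the file as ignored instead of crashing.
        return None


inside = rel_to_root("/repo/src/pkg", "/repo")
outside = rel_to_root("/elsewhere/project", "/repo")
```

Catching `ValueError` at the call site is the documented way to test containment with `relative_to`; there is no boolean "is under" method on `PurePath` in older Pythons.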
@@ -2,6 +2,7 @@ import colorsys
import math
import os
import random
import shutil
import sqlite3
import sys
import time
@@ -27,7 +28,7 @@ from tree_sitter_languages import get_language, get_parser  # noqa: E402
Tag = namedtuple("Tag", "rel_fname fname line name kind".split())


SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError)
SQLITE_ERRORS = (sqlite3.OperationalError, sqlite3.DatabaseError, OSError)


class RepoMap:
@@ -166,13 +167,52 @@ class RepoMap:
            # Just return the full fname.
            return fname

    def tags_cache_error(self, original_error=None):
        """Handle SQLite errors by trying to recreate cache, falling back to dict if needed"""

        if self.verbose and original_error:
            self.io.tool_warning(f"Tags cache error: {str(original_error)}")

        if isinstance(getattr(self, "TAGS_CACHE", None), dict):
            return

        path = Path(self.root) / self.TAGS_CACHE_DIR

        # Try to recreate the cache
        try:
            # Delete existing cache dir
            if path.exists():
                shutil.rmtree(path)

            # Try to create new cache
            new_cache = Cache(path)

            # Test that it works
            test_key = "test"
            new_cache[test_key] = "test"
            _ = new_cache[test_key]
            del new_cache[test_key]

            # If we got here, the new cache works
            self.TAGS_CACHE = new_cache
            return

        except SQLITE_ERRORS as e:
            # If anything goes wrong, warn and fall back to dict
            self.io.tool_warning(
                f"Unable to use tags cache at {path}, falling back to memory cache"
            )
            if self.verbose:
                self.io.tool_warning(f"Cache recreation error: {str(e)}")

        self.TAGS_CACHE = dict()

    def load_tags_cache(self):
        path = Path(self.root) / self.TAGS_CACHE_DIR
        try:
            self.TAGS_CACHE = Cache(path)
        except SQLITE_ERRORS:
            self.io.tool_warning(f"Unable to use tags cache, delete {path} to resolve.")
            self.TAGS_CACHE = dict()
        except SQLITE_ERRORS as e:
            self.tags_cache_error(e)
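The pattern in `tags_cache_error` and `load_tags_cache` above is: try the persistent cache, and on failure fall back to a plain in-memory `dict`, which supports the same `get`/`[]` operations the callers use. A stand-alone sketch of that fallback, with a deliberately failing stand-in class (all names here are illustrative, not diskcache's API):

```python
class BrokenCache:
    """Stand-in for a diskcache.Cache whose backing store is corrupted."""

    def __init__(self, path):
        raise OSError(f"unable to open cache at {path}")


SQLITE_ERRORS = (OSError,)  # narrowed for this sketch


def load_tags_cache(path, cache_cls=BrokenCache):
    # Same fallback shape as RepoMap.load_tags_cache: callers only need
    # mapping operations, so a dict is a drop-in (if non-persistent) substitute.
    try:
        return cache_cls(path)
    except SQLITE_ERRORS:
        return dict()


cache = load_tags_cache("/tmp/tags.cache")
cache["foo.py"] = {"mtime": 1, "data": []}
```

Degrading to a memory cache trades persistence for availability: the repo scan runs again next session, but the current session keeps working.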
    def save_tags_cache(self):
        pass
@@ -190,9 +230,18 @@ class RepoMap:
            return []

        cache_key = fname
        val = self.TAGS_CACHE.get(cache_key)  # Issue #1308
        try:
            val = self.TAGS_CACHE.get(cache_key)  # Issue #1308
        except SQLITE_ERRORS as e:
            self.tags_cache_error(e)
            val = self.TAGS_CACHE.get(cache_key)

        if val is not None and val.get("mtime") == file_mtime:
            return self.TAGS_CACHE[cache_key]["data"]
            try:
                return self.TAGS_CACHE[cache_key]["data"]
            except SQLITE_ERRORS as e:
                self.tags_cache_error(e)
                return self.TAGS_CACHE[cache_key]["data"]

        # miss!
        data = list(self.get_tags_raw(fname, rel_fname))
@@ -201,8 +250,9 @@ class RepoMap:
        try:
            self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}
            self.save_tags_cache()
        except SQLITE_ERRORS:
            pass
        except SQLITE_ERRORS as e:
            self.tags_cache_error(e)
            self.TAGS_CACHE[cache_key] = {"mtime": file_mtime, "data": data}

        return data

@@ -302,7 +352,13 @@ class RepoMap:
        # https://networkx.org/documentation/stable/_modules/networkx/algorithms/link_analysis/pagerank_alg.html#pagerank
        personalize = 100 / len(fnames)

        if len(fnames) - len(self.TAGS_CACHE) > 100:
        try:
            cache_size = len(self.TAGS_CACHE)
        except SQLITE_ERRORS as e:
            self.tags_cache_error(e)
            cache_size = len(self.TAGS_CACHE)

        if len(fnames) - cache_size > 100:
            self.io.tool_output(
                "Initial repo scan can be slow in larger repos, but only happens once."
            )
@@ -312,6 +368,8 @@ class RepoMap:
            showing_bar = False

        for fname in fnames:
            if self.verbose:
                self.io.tool_output(f"Processing {fname}")
            if progress and not showing_bar:
                progress()

@@ -398,7 +456,11 @@ class RepoMap:
        try:
            ranked = nx.pagerank(G, weight="weight", **pers_args)
        except ZeroDivisionError:
            return []
            # Issue #1536
            try:
                ranked = nx.pagerank(G, weight="weight")
            except ZeroDivisionError:
                return []

        # distribute the rank from each source node, across all of its out edges
        ranked_definitions = defaultdict(float)
@@ -415,7 +477,9 @@ class RepoMap:
                ranked_definitions[(dst, ident)] += data["rank"]

        ranked_tags = []
        ranked_definitions = sorted(ranked_definitions.items(), reverse=True, key=lambda x: x[1])
        ranked_definitions = sorted(
            ranked_definitions.items(), reverse=True, key=lambda x: (x[1], x[0])
        )
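The sort-key change above (from `x[1]` to `(x[1], x[0])`) is what makes the repo-map deterministic: entries with equal rank are now tie-broken by their `(fname, ident)` key instead of falling back to dict insertion order. A small sketch of the effect, with illustrative file/identifier names:

```python
from collections import defaultdict

ranked_definitions = defaultdict(float)
ranked_definitions[("b.py", "foo")] = 1.0
ranked_definitions[("a.py", "bar")] = 1.0  # tied rank with ("b.py", "foo")
ranked_definitions[("c.py", "baz")] = 2.0

# key=(rank, key): ties are broken by the (fname, ident) tuple, so the
# result no longer depends on the order entries were inserted.
ordered = sorted(ranked_definitions.items(), reverse=True, key=lambda x: (x[1], x[0]))
```

With `key=lambda x: x[1]` alone, Python's stable sort would leave the two rank-1.0 entries in insertion order, which can differ run to run when the graph is built from unordered sets.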
        # dump(ranked_definitions)

@@ -451,11 +515,18 @@ class RepoMap:
        force_refresh=False,
    ):
        # Create a cache key
        cache_key = (
        cache_key = [
            tuple(sorted(chat_fnames)) if chat_fnames else None,
            tuple(sorted(other_fnames)) if other_fnames else None,
            max_map_tokens,
        )
        ]

        if self.refresh == "auto":
            cache_key += [
                tuple(sorted(mentioned_fnames)) if mentioned_fnames else None,
                tuple(sorted(mentioned_idents)) if mentioned_idents else None,
            ]
        cache_key = tuple(cache_key)

        use_cache = False
        if not force_refresh:
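The hunk above builds the cache key as a list (so the "auto" refresh mode can append extra components) and then converts it with `tuple()`, because lists are unhashable and cannot key a dict. A self-contained sketch of the same shape, with a hypothetical `make_cache_key` helper:

```python
def make_cache_key(chat_fnames, other_fnames, max_map_tokens,
                   refresh="auto", mentioned_fnames=None, mentioned_idents=None):
    cache_key = [
        tuple(sorted(chat_fnames)) if chat_fnames else None,
        tuple(sorted(other_fnames)) if other_fnames else None,
        max_map_tokens,
    ]
    if refresh == "auto":
        cache_key += [
            tuple(sorted(mentioned_fnames)) if mentioned_fnames else None,
            tuple(sorted(mentioned_idents)) if mentioned_idents else None,
        ]
    # Lists are unhashable; only the final tuple can be used as a dict key.
    return tuple(cache_key)


key = make_cache_key(["b.py", "a.py"], None, 1024)
cache = {key: "rendered repo map"}
```

Sorting each filename collection before freezing it means two calls with the same files in different order hit the same cache entry.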
aider/resources/__init__.py (new file, +3 lines)
@@ -0,0 +1,3 @@
# This ensures that importlib_resources.files("aider.resources")
# doesn't raise ImportError, even if there are no other files in this
# dir.

aider/resources/model-metadata.json (new file, +11 lines)
@@ -0,0 +1,11 @@
{
    "openrouter/qwen/qwen-2.5-coder-32b-instruct": {
        "max_tokens": 33792,
        "max_input_tokens": 33792,
        "max_output_tokens": 33792,
        "input_cost_per_token": 0.00000018,
        "output_cost_per_token": 0.00000018,
        "litellm_provider": "openrouter",
        "mode": "chat",
    },
}

@@ -185,7 +185,9 @@ class Scraper:

        headers = {"User-Agent": f"Mozilla./5.0 ({aider_user_agent})"}
        try:
            with httpx.Client(headers=headers, verify=self.verify_ssl) as client:
            with httpx.Client(
                headers=headers, verify=self.verify_ssl, follow_redirects=True
            ) as client:
                response = client.get(url)
                response.raise_for_status()
                return response.text, response.headers.get("content-type", "").split(";")[0]

@@ -1,9 +1,9 @@
import hashlib
import json

import backoff
import time

from aider.dump import dump  # noqa: F401
from aider.exceptions import LiteLLMExceptions
from aider.llm import litellm

# from diskcache import Cache
@@ -13,37 +13,7 @@ CACHE_PATH = "~/.aider.send.cache.v1"
CACHE = None
# CACHE = Cache(CACHE_PATH)


def retry_exceptions():
    import httpx

    return (
        httpx.ConnectError,
        httpx.RemoteProtocolError,
        httpx.ReadTimeout,
        litellm.exceptions.APIConnectionError,
        litellm.exceptions.APIError,
        litellm.exceptions.RateLimitError,
        litellm.exceptions.ServiceUnavailableError,
        litellm.exceptions.Timeout,
        litellm.exceptions.InternalServerError,
        litellm.llms.anthropic.chat.AnthropicError,
    )


def lazy_litellm_retry_decorator(func):
    def wrapper(*args, **kwargs):
        decorated_func = backoff.on_exception(
            backoff.expo,
            retry_exceptions(),
            max_time=60,
            on_backoff=lambda details: print(
                f"{details.get('exception', 'Exception')}\nRetry in {details['wait']:.1f} seconds."
            ),
        )(func)
        return decorated_func(*args, **kwargs)

    return wrapper
RETRY_TIMEOUT = 60


def send_completion(
@@ -52,11 +22,8 @@ def send_completion(
    functions,
    stream,
    temperature=0,
    extra_headers=None,
    max_tokens=None,
    extra_params=None,
):
    from aider.llm import litellm

    kwargs = dict(
        model=model_name,
        messages=messages,
@@ -69,10 +36,9 @@ def send_completion(
        function = functions[0]
        kwargs["tools"] = [dict(type="function", function=function)]
        kwargs["tool_choice"] = {"type": "function", "function": {"name": function["name"]}}
    if extra_headers is not None:
        kwargs["extra_headers"] = extra_headers
    if max_tokens is not None:
        kwargs["max_tokens"] = max_tokens

    if extra_params is not None:
        kwargs.update(extra_params)

    key = json.dumps(kwargs, sort_keys=True).encode()

@@ -82,8 +48,6 @@ def send_completion(
    if not stream and CACHE is not None and key in CACHE:
        return hash_object, CACHE[key]

    # del kwargs['stream']

    res = litellm.completion(**kwargs)

    if not stream and CACHE is not None:
@@ -92,19 +56,42 @@ def send_completion(
    return hash_object, res


@lazy_litellm_retry_decorator
def simple_send_with_retries(model_name, messages, extra_headers=None):
    try:
        kwargs = {
            "model_name": model_name,
            "messages": messages,
            "functions": None,
            "stream": False,
        }
        if extra_headers is not None:
            kwargs["extra_headers"] = extra_headers
def simple_send_with_retries(model_name, messages, extra_params=None):
    litellm_ex = LiteLLMExceptions()

        _hash, response = send_completion(**kwargs)
        return response.choices[0].message.content
    except (AttributeError, litellm.exceptions.BadRequestError):
        return
    retry_delay = 0.125
    while True:
        try:
            kwargs = {
                "model_name": model_name,
                "messages": messages,
                "functions": None,
                "stream": False,
                "extra_params": extra_params,
            }

            _hash, response = send_completion(**kwargs)
            if not response or not hasattr(response, "choices") or not response.choices:
                return None
            return response.choices[0].message.content
        except litellm_ex.exceptions_tuple() as err:
            ex_info = litellm_ex.get_ex_info(err)

            print(str(err))
            if ex_info.description:
                print(ex_info.description)

            should_retry = ex_info.retry
            if should_retry:
                retry_delay *= 2
                if retry_delay > RETRY_TIMEOUT:
                    should_retry = False

            if not should_retry:
                return None

            print(f"Retrying in {retry_delay:.1f} seconds...")
            time.sleep(retry_delay)
            continue
        except AttributeError:
            return None
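The rewritten retry loop above replaces the `backoff` decorator with an explicit exponential schedule: the delay starts at 0.125s, doubles on each retryable error, and the loop gives up once the doubled delay exceeds `RETRY_TIMEOUT`. A sketch that isolates just that schedule (the generator name is illustrative):

```python
RETRY_TIMEOUT = 60


def retry_delays(initial=0.125, cap=RETRY_TIMEOUT):
    """Yield the sleep intervals the loop above would use, doubling until the cap."""
    delay = initial
    while True:
        delay *= 2
        if delay > cap:
            return  # mirrors should_retry = False when retry_delay > RETRY_TIMEOUT
        yield delay


delays = list(retry_delays())
```

So a persistently failing request sleeps 0.25, 0.5, 1, 2, ... up to 32 seconds before the function returns `None`, bounding total wait without an external dependency.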
@@ -8,4 +8,7 @@ model_warnings = "https://aider.chat/docs/llms/warnings.html"
token_limits = "https://aider.chat/docs/troubleshooting/token-limits.html"
llms = "https://aider.chat/docs/llms.html"
large_repos = "https://aider.chat/docs/faq.html#can-i-use-aider-in-a-large-mono-repo"
github_issues = "https://github.com/paul-gauthier/aider/issues/new"
github_issues = "https://github.com/Aider-AI/aider/issues/new"
git_index_version = "https://github.com/Aider-AI/aider/issues/211"
install_properly = "https://aider.chat/docs/troubleshooting/imports.html"
analytics = "https://aider.chat/docs/more/analytics.html"

@@ -216,6 +216,9 @@ def get_pip_install(args):
        "-m",
        "pip",
        "install",
        "--upgrade",
        "--upgrade-strategy",
        "only-if-needed",
    ]
    cmd += args
    return cmd
@@ -272,8 +275,12 @@ class Spinner:
        self.start_time = time.time()
        self.last_update = 0
        self.visible = False
        self.is_tty = sys.stdout.isatty()

    def step(self):
        if not self.is_tty:
            return

        current_time = time.time()
        if not self.visible and current_time - self.start_time >= 0.5:
            self.visible = True
@@ -289,7 +296,7 @@ class Spinner:
        print(f"\r{self.text} {next(self.spinner_chars)}\r{self.text} ", end="", flush=True)

    def end(self):
        if self.visible:
        if self.visible and self.is_tty:
            print("\r" + " " * (len(self.text) + 3))
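The `is_tty` checks added to `Spinner` above keep carriage-return animation frames out of logs and pipes, where `\r` tricks don't work. A trimmed-down sketch of the same gating (fields and the injected `stream` parameter are illustrative, not aider's exact class), using `io.StringIO` to simulate redirected output:

```python
import io
import sys


class Spinner:
    """Minimal sketch of TTY-gated spinner output."""

    def __init__(self, text, stream=None):
        self.text = text
        self.stream = stream or sys.stdout
        self.is_tty = self.stream.isatty()
        self.visible = False

    def step(self):
        if not self.is_tty:
            return  # piped/redirected output: stay silent
        self.visible = True
        print(f"\r{self.text} ...", end="", file=self.stream, flush=True)

    def end(self):
        if self.visible and self.is_tty:
            # Overwrite the spinner line with spaces before moving on.
            print("\r" + " " * (len(self.text) + 3), file=self.stream)


buf = io.StringIO()  # StringIO.isatty() is False, like `aider | tee log`
sp = Spinner("Working", stream=buf)
sp.step()
sp.end()
```

When stdout is not a terminal, both `step()` and `end()` become no-ops, so nothing is written at all.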
@@ -346,7 +353,7 @@ def check_pip_install_extra(io, module, prompt, pip_install_cmd, self_update=Fal
    success, output = run_install(cmd)
    if success:
        if not module:
            return
            return True
        try:
            __import__(module)
            return True

@@ -21,7 +21,7 @@ def install_from_main_branch(io):
        io,
        None,
        "Install the development version of aider from the main branch?",
        ["--upgrade", "git+https://github.com/paul-gauthier/aider.git"],
        ["git+https://github.com/Aider-AI/aider.git"],
        self_update=True,
    )

@@ -50,7 +50,7 @@ def install_upgrade(io, latest_version=None):
        io,
        None,
        new_ver_text,
        ["--upgrade", "aider-chat"],
        ["aider-chat"],
        self_update=True,
    )


@@ -3,18 +3,25 @@ import os
import queue
import tempfile
import time
import warnings

from prompt_toolkit.shortcuts import prompt

from aider.llm import litellm

from .dump import dump  # noqa: F401

warnings.filterwarnings(
    "ignore", message="Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work"
)

from pydub import AudioSegment  # noqa

try:
    import soundfile as sf
except (OSError, ModuleNotFoundError):
    sf = None

from prompt_toolkit.shortcuts import prompt

from .dump import dump  # noqa: F401


class SoundDeviceError(Exception):
    pass
@@ -27,7 +34,7 @@ class Voice:

    threshold = 0.15

    def __init__(self):
    def __init__(self, audio_format="wav"):
        if sf is None:
            raise SoundDeviceError
        try:
@@ -37,6 +44,9 @@ class Voice:
            self.sd = sd
        except (OSError, ModuleNotFoundError):
            raise SoundDeviceError
        if audio_format not in ["wav", "mp3", "webm"]:
            raise ValueError(f"Unsupported audio format: {audio_format}")
        self.audio_format = audio_format

    def callback(self, indata, frames, time, status):
        """This is called (from a separate thread) for each audio block."""
@@ -80,7 +90,7 @@ class Voice:
    def raw_record_and_transcribe(self, history, language):
        self.q = queue.Queue()

        filename = tempfile.mktemp(suffix=".wav")
        temp_wav = tempfile.mktemp(suffix=".wav")

        try:
            sample_rate = int(self.sd.query_devices(None, "input")["default_samplerate"])
@@ -99,10 +109,18 @@ class Voice:
        except self.sd.PortAudioError as err:
            raise SoundDeviceError(f"Error accessing audio input device: {err}")

        with sf.SoundFile(filename, mode="x", samplerate=sample_rate, channels=1) as file:
        with sf.SoundFile(temp_wav, mode="x", samplerate=sample_rate, channels=1) as file:
            while not self.q.empty():
                file.write(self.q.get())

        if self.audio_format != "wav":
            filename = tempfile.mktemp(suffix=f".{self.audio_format}")
            audio = AudioSegment.from_wav(temp_wav)
            audio.export(filename, format=self.audio_format)
            os.remove(temp_wav)
        else:
            filename = temp_wav

        with open(filename, "rb") as fh:
            try:
                transcript = litellm.transcription(
@@ -112,6 +130,9 @@ class Voice:
            print(f"Unable to transcribe (unknown): {err}")
            return

        if self.audio_format != "wav":
            os.remove(filename)

        text = transcript.text
        return text
||||
|
||||
@@ -1,20 +1,144 @@
|
||||
---
|
||||
title: Release history
|
||||
parent: More info
|
||||
nav_order: 999
|
||||
nav_order: 900
|
||||
highlight_image: /assets/blame.jpg
|
||||
description: Release notes and stats on aider writing its own code.
|
||||
---
|
||||
|
||||
# Release history
|
||||
|
||||
{% include blame.md %}
|
||||
|
||||
The above
|
||||
[stats are based on the git commit history](/docs/faq.html#how-are-the-aider-wrote-xx-of-code-stats-computed)
|
||||
in the aider repo.
|
||||
|
||||
<!--[[[cog
|
||||
# This page is a copy of HISTORY.md, adding the front matter above.
|
||||
text = open("HISTORY.md").read()
|
||||
text = text.replace("# Release history", "")
|
||||
cog.out(text)
|
||||
]]]-->
|
||||
|
||||
# Release history
|
||||
|
||||
|
||||
### Aider v0.63.0
|
||||
|
||||
- Support for Qwen 2.5 Coder 32B.
|
||||
- `/web` command just adds the page to the chat, without triggering an LLM response.
|
||||
- Improved prompting for the user's preferred chat language.
|
||||
- Improved handling of LiteLLM exceptions.
|
||||
- Bugfix for double-counting tokens when reporting cache stats.
|
||||
- Bugfix for the LLM creating new files.
|
||||
- Other small bug fixes.
|
||||
- Aider wrote 55% of the code in this release.
|
||||
|
||||
### Aider v0.62.0
|
||||
|
||||
- Full support for Claude 3.5 Haiku
|
||||
- Scored 75% on [aider's code editing leaderboard](https://aider.chat/docs/leaderboards/).
|
||||
- Almost as good as Sonnet at much lower cost.
|
||||
- Launch with `--haiku` to use it.
|
||||
- Easily apply file edits from ChatGPT, Claude or other web apps
|
||||
- Chat with ChatGPT or Claude via their web app.
|
||||
- Give it your source files and ask for the changes you want.
|
||||
- Use the web app's "copy response" button to copy the entire reply from the LLM.
|
||||
- Run `aider --apply-clipboard-edits file-to-edit.js`.
|
||||
- Aider will edit your file with the LLM's changes.
|
||||
- Bugfix for creating new files.
|
||||
- Aider wrote 84% of the code in this release.
|
||||
|
||||
### Aider v0.61.0
|
||||
|
||||
- Load and save aider slash-commands to files:
|
||||
- `/save <fname>` command will make a file of `/add` and `/read-only` commands that recreate the current file context in the chat.
|
||||
- `/load <fname>` will replay the commands in the file.
|
||||
- You can use `/load` to run any arbitrary set of slash-commands, not just `/add` and `/read-only`.
|
||||
- Use `--load <fname>` to run a list of commands on launch, before the interactive chat begins.
|
||||
- Anonymous, opt-in [analytics](https://aider.chat/docs/more/analytics.html) with no personal data sharing.
|
||||
- Aider follows litellm's `supports_vision` attribute to enable image support for models.
|
||||
- Bugfix for when diff mode flexibly handles the model using the wrong filename.
|
||||
- Displays filenames in sorted order for `/add` and `/read-only`.
|
||||
- New `--no-fancy-input` switch disables prompt toolkit input, now still available with `--no-pretty`.
|
||||
- Override browser config with `--no-browser` or `--no-gui`.
|
||||
- Offer to open documentation URLs when errors occur.
|
||||
- Properly support all o1 models, regardless of provider.
|
||||
- Improved layout of filenames above input prompt.
|
||||
- Better handle corrupted repomap tags cache.
|
||||
- Improved handling of API errors, especially when accessing the weak model.
|
||||
- Aider wrote 68% of the code in this release.
|
||||
|
||||
### Aider v0.60.1
|
||||
|
||||
- Enable image support for Sonnet 10/22.
|
||||
- Display filenames in sorted order.
|
||||
|
||||
### Aider v0.60.0
|
||||
|
||||
- Full support for Sonnet 10/22, the new SOTA model on aider's code editing benchmark.
|
||||
- Aider uses Sonnet 10/22 by default.
|
||||
- Improved formatting of added and read-only files above chat prompt, by @jbellis.
|
||||
- Improved support for o1 models by more flexibly parsing their nonconforming code edit replies.
|
||||
- Corrected diff edit format prompt that only the first match is replaced.
|
||||
- Stronger whole edit format prompt asking for clean file names.
|
||||
- Now offers to add `.env` to the `.gitignore` file.
|
||||
- Ships with a small model metadata json file to handle models not yet updated in litellm.
|
||||
- Model settings for o1 models on azure.
|
||||
- Bugfix to properly include URLs in `/help` RAG results.
|
||||
- Aider wrote 49% of the code in this release.
|
||||
|
||||
### Aider v0.59.1
|
||||
|
||||
- Check for obsolete `yes: true` in yaml config, show helpful error.
|
||||
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
|
||||
|
||||
### Aider v0.59.0
|
||||
|
||||
- Improvements to `/read-only`:
|
||||
- Now supports shell-style auto-complete of the full file system.
|
||||
- Still auto-completes the full paths of the repo files like `/add`.
|
||||
- Now supports globs like `src/**/*.py`
|
||||
- Renamed `--yes` to `--yes-always`.
|
||||
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
|
||||
- Existing YAML and .env files will need to be updated.
|
||||
- Can still abbreviate to `--yes` on the command line.
|
||||
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
|
||||
- `/settings` now includes the same announcement lines that would print at launch.
|
||||
- Sanity checks the `--editor-model` on launch now, same as main and weak models.
|
||||
- Added `--skip-sanity-check-repo` switch to speedup launch in large repos.
|
||||
- Bugfix so architect mode handles Control-C properly.
|
||||
- Repo-map is deterministic now, with improved caching logic.
|
||||
- Improved commit message prompt.
|
||||
- Aider wrote 77% of the code in this release.
|
||||
|
||||
### Aider v0.58.1
|
||||
|
||||
- Fixed bug where cache warming pings caused subsequent user messages to trigger a tight loop of LLM requests.
|
||||
|
||||
### Aider v0.58.0
|
||||
|
||||
- [Use a pair of Architect/Editor models for improved coding](https://aider.chat/2024/09/26/architect.html)
|
||||
- Use a strong reasoning model like o1-preview as your Architect.
|
||||
- Use a cheaper, faster model like gpt-4o as your Editor.
|
||||
- New `--o1-preview` and `--o1-mini` shortcuts.
|
||||
- Support for new Gemini 002 models.
|
||||
- Better support for Qwen 2.5 models.
|
||||
- Many confirmation questions can be skipped for the rest of the session with "(D)on't ask again" response.
|
||||
- Autocomplete for `/read-only` supports the entire filesystem.
|
||||
- New settings for completion menu colors.
|
||||
- New `/copy` command to copy the last LLM response to the clipboard.
|
||||
- Renamed `/clipboard` to `/paste`.
|
||||
- Will now follow HTTP redirects when scraping urls.
|
||||
- New `--voice-format` switch to send voice audio as wav/mp3/webm, by @mbailey.
|
||||
- ModelSettings takes `extra_params` dict to specify any extras to pass to `litellm.completion()`.
|
||||
- Support for cursor shapes when in vim mode.
|
||||
- Numerous bug fixes.
|
||||
- Aider wrote 53% of the code in this release.
|
||||
|
||||
### Aider v0.57.1
|
||||
|
||||
- Fixed dependency conflict between aider-chat[help] and [playwright].
|
||||
|
||||
### Aider v0.57.0
|
||||
|
||||
@@ -676,7 +800,7 @@ cog.out(text)

### Aider v0.14.0

- [Support for Claude2 and other LLMs via OpenRouter](https://aider.chat/docs/faq.html#accessing-other-llms-with-openrouter) by @joshuavial
- Documentation for [running the aider benchmarking suite](https://github.com/paul-gauthier/aider/tree/main/benchmark)
- Documentation for [running the aider benchmarking suite](https://github.com/Aider-AI/aider/tree/main/benchmark)
- Aider now requires Python >= 3.9

@@ -24,7 +24,7 @@ exclude:

aux_links:
  "GitHub":
    - "https://github.com/paul-gauthier/aider"
    - "https://github.com/Aider-AI/aider"
  "Discord":
    - "https://discord.gg/Tv2uQnR88V"
  "Blog":

@@ -32,11 +32,11 @@ aux_links:

nav_external_links:
  - title: "GitHub"
    url: "https://github.com/paul-gauthier/aider"
    url: "https://github.com/Aider-AI/aider"
  - title: "Discord"
    url: "https://discord.gg/Tv2uQnR88V"

repository: paul-gauthier/aider
repository: Aider-AI/aider

callouts:
  tip:
492
aider/website/_data/architect.yml
Normal file
@@ -0,0 +1,492 @@
- dirname: 2024-09-25-21-17-19--architect-sonnet-sonnet-diff
  test_cases: 133
  model: claude-3.5-sonnet
  editor_model: claude-3.5-sonnet
  editor_edit_format: diff
  edit_format: architect
  commit_hash: c18d6a8-dirty
  pass_rate_1: 62.4
  pass_rate_2: 80.5
  percent_cases_well_formed: 100.0
  error_outputs: 3
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 183
  lazy_comments: 6
  syntax_errors: 9
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model openrouter/anthropic/claude-3.5-sonnet
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 25.1
  total_cost: 4.9502

- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
  test_cases: 133
  model: claude-3.5-sonnet
  edit_format: diff
  commit_hash: 35f21b5
  pass_rate_1: 57.1
  pass_rate_2: 77.4
  percent_cases_well_formed: 99.2
  error_outputs: 23
  released: 2024-06-20
  num_malformed_responses: 4
  num_with_malformed_responses: 1
  user_asks: 2
  lazy_comments: 0
  syntax_errors: 1
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --sonnet
  date: 2024-07-04
  versions: 0.42.1-dev
  seconds_per_case: 17.6
  total_cost: 3.6346

- dirname: 2024-09-25-21-25-01--architect-o1mini-4o-jr-diff
  test_cases: 133
  model: o1-mini
  editor_model: gpt-4o
  editor_edit_format: diff
  edit_format: architect
  commit_hash: 3f682ed-dirty, 25e833b
  pass_rate_1: 51.1
  pass_rate_2: 70.7
  percent_cases_well_formed: 100.0
  error_outputs: 12
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 214
  lazy_comments: 6
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model o1-mini
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 23.7
  total_cost: 9.3158

- dirname: 2024-09-26-15-05-58--architect-o1mini-deep-jr-whole
  test_cases: 133
  model: o1-mini
  edit_format: architect
  commit_hash: 1676653-dirty
  editor_model: deepseek
  editor_edit_format: whole
  pass_rate_1: 51.9
  pass_rate_2: 71.4
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 199
  lazy_comments: 11
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model o1-mini
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 48.2
  total_cost: 5.6069
- dirname: 2024-09-25-21-33-40--architect-4o-4o-jr-diff
  test_cases: 133
  model: gpt-4o
  editor_model: gpt-4o
  editor_edit_format: diff
  edit_format: architect
  commit_hash: 9f3cd92
  pass_rate_1: 56.4
  pass_rate_2: 75.2
  percent_cases_well_formed: 100.0
  error_outputs: 13
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 207
  lazy_comments: 8
  syntax_errors: 1
  indentation_errors: 1
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model gpt-4o
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 18.2
  total_cost: 6.0918

- dirname: 2024-09-21-16-45-11--o1-preview-flex-sr-markers
  test_cases: 133
  model: o1-preview
  edit_format: diff
  commit_hash: 5493654-dirty
  pass_rate_1: 57.9
  pass_rate_2: 79.7
  percent_cases_well_formed: 93.2
  error_outputs: 11
  num_malformed_responses: 11
  num_with_malformed_responses: 9
  user_asks: 3
  lazy_comments: 0
  syntax_errors: 10
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model o1-preview
  date: 2024-09-21
  versions: 0.56.1.dev
  seconds_per_case: 80.9
  total_cost: 63.9190

- dirname: 2024-09-25-21-39-05--architect-o1preview-4o-jr-diff
  test_cases: 133
  model: o1-preview
  editor_model: gpt-4o
  editor_edit_format: diff
  edit_format: architect
  commit_hash: 9f3cd92
  pass_rate_1: 63.2
  pass_rate_2: 80.5
  percent_cases_well_formed: 100.0
  error_outputs: 23
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 191
  lazy_comments: 2
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 4
  command: aider --model o1-preview
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 42.3
  total_cost: 39.3766

- dirname: 2024-09-25-21-52-42--architect-o1preview-sonnet-jr-diff
  test_cases: 133
  model: o1-preview
  editor_model: claude-3.5-sonnet
  editor_edit_format: diff
  edit_format: architect
  commit_hash: 9f3cd92
  pass_rate_1: 60.9
  pass_rate_2: 82.7
  percent_cases_well_formed: 100.0
  error_outputs: 1
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 180
  lazy_comments: 3
  syntax_errors: 9
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model o1-preview
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 44.9
  total_cost: 37.6192
- dirname: 2024-09-21-16-40-56--o1-mini-flex-sr-markers
  test_cases: 36
  model: o1-mini
  edit_format: diff
  commit_hash: 5493654
  pass_rate_1: 50.0
  pass_rate_2: 61.1
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 3
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 1
  exhausted_context_windows: 0
  test_timeouts: 0
  command: aider --model o1-mini
  date: 2024-09-21
  versions: 0.56.1.dev
  seconds_per_case: 26.7
  total_cost: 2.4226

- dirname: 2024-09-25-23-12-14--architect-o1mini-deep-jr-diff
  test_cases: 133
  model: o1-mini
  edit_format: architect
  commit_hash: 9f3cd92-dirty
  editor_model: deepseek
  editor_edit_format: diff
  pass_rate_1: 48.9
  pass_rate_2: 69.2
  percent_cases_well_formed: 100.0
  error_outputs: 1
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 202
  lazy_comments: 12
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model o1-mini
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 52.2
  total_cost: 5.7927

- dirname: 2024-09-25-23-18-16--architect-o1preview-deep-jr-diff
  test_cases: 133
  model: o1-preview
  edit_format: architect
  commit_hash: 9f3cd92-dirty
  editor_model: deepseek
  editor_edit_format: diff
  pass_rate_1: 64.7
  pass_rate_2: 80.5
  percent_cases_well_formed: 100.0
  error_outputs: 5
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 180
  lazy_comments: 2
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model o1-preview
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 73.2
  total_cost: 35.7887

- dirname: 2024-09-25-23-30-36--architect-o1preview-deep-jr-whole
  test_cases: 133
  model: o1-preview
  edit_format: architect
  commit_hash: 9f3cd92-dirty
  editor_model: deepseek
  editor_edit_format: whole
  pass_rate_1: 63.9
  pass_rate_2: 85.0
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 181
  lazy_comments: 12
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model o1-preview
  date: 2024-09-25
  versions: 0.57.2.dev
  seconds_per_case: 67.4
  total_cost: 35.3152
- dirname: 2024-09-26-15-15-17--architect-sonnet-deep-jr-whole
  test_cases: 133
  model: claude-3.5-sonnet
  edit_format: architect
  commit_hash: bc1559f-dirty
  editor_model: deepseek
  editor_edit_format: whole
  pass_rate_1: 61.7
  pass_rate_2: 78.9
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 184
  lazy_comments: 5
  syntax_errors: 9
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model openrouter/anthropic/claude-3.5-sonnet
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 37.2
  total_cost: 2.1510

- dirname: 2024-09-26-15-33-28--costs-gpt4o-diff
  test_cases: 133
  model: gpt-4o
  edit_format: diff
  commit_hash: 89aa385-dirty
  pass_rate_1: 55.6
  pass_rate_2: 71.4
  percent_cases_well_formed: 97.7
  error_outputs: 5
  num_malformed_responses: 5
  num_with_malformed_responses: 3
  user_asks: 10
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 1
  exhausted_context_windows: 0
  test_timeouts: 0
  command: aider --model gpt-4o
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 9.7
  total_cost: 3.8088

- dirname: 2024-09-26-15-41-08--architect-4o-deep-jr-whole
  test_cases: 133
  model: gpt-4o
  edit_format: architect
  commit_hash: 89aa385-dirty
  editor_model: deepseek
  editor_edit_format: whole
  pass_rate_1: 60.9
  pass_rate_2: 73.7
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 187
  lazy_comments: 12
  syntax_errors: 5
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model gpt-4o
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 38.0
  total_cost: 2.4737

- dirname: 2024-09-26-15-54-08--architect-4o-deep-jr-diff
  test_cases: 133
  model: gpt-4o
  edit_format: architect
  commit_hash: 89aa385-dirty
  editor_model: deepseek
  editor_edit_format: diff
  pass_rate_1: 57.1
  pass_rate_2: 74.4
  percent_cases_well_formed: 100.0
  error_outputs: 4
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 192
  lazy_comments: 6
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model gpt-4o
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 44.0
  total_cost: 2.5498
- dirname: 2024-09-26-16-06-39--architect-sonnet-deep-jr-diff
  test_cases: 133
  model: claude-3.5-sonnet
  edit_format: architect
  commit_hash: 89aa385-dirty
  editor_model: deepseek
  editor_edit_format: diff
  pass_rate_1: 61.7
  pass_rate_2: 78.9
  percent_cases_well_formed: 100.0
  error_outputs: 2
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 184
  lazy_comments: 2
  syntax_errors: 9
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model openrouter/anthropic/claude-3.5-sonnet
  date: 2024-09-26
  versions: 0.57.2.dev
  seconds_per_case: 43.2
  total_cost: 2.1488

- dirname: 2024-09-27-18-15-32--architect-4omini-4omini
  test_cases: 133
  model: gpt-4o-mini
  edit_format: architect
  commit_hash: 0bd8058-dirty
  editor_model: gpt-4o-mini
  editor_edit_format: whole
  pass_rate_1: 43.6
  pass_rate_2: 60.2
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 208
  lazy_comments: 2
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model gpt-4o-mini
  date: 2024-09-27
  versions: 0.57.2.dev
  seconds_per_case: 21.0
  total_cost: 0.1527

- dirname: 2024-07-18-18-57-46--gpt-4o-mini-whole
  test_cases: 133
  model: gpt-4o-mini
  edit_format: whole
  commit_hash: d31eef3-dirty
  pass_rate_1: 40.6
  pass_rate_2: 55.6
  released: 2024-07-18
  percent_cases_well_formed: 100.0
  error_outputs: 1
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 1
  lazy_comments: 0
  syntax_errors: 1
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model gpt-4o-mini
  date: 2024-07-18
  versions: 0.44.1-dev
  seconds_per_case: 7.8
  total_cost: 0.0916

- dirname: 2024-09-29-22-35-36--architect-o1preview-o1mini-whole
  test_cases: 133
  model: o1-preview
  edit_format: architect
  commit_hash: 53ca83b
  editor_model: o1-mini
  editor_edit_format: whole
  pass_rate_1: 65.4
  pass_rate_2: 85.0
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 179
  lazy_comments: 4
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model o1-preview
  date: 2024-09-29
  versions: 0.58.1.dev
  seconds_per_case: 39.7
  total_cost: 36.2078
@@ -2531,3 +2531,451 @@
    fry69: 15
  start_tag: v0.55.0
  total_lines: 277
- aider_percentage: 69.98
  aider_total: 394
  end_date: '2024-09-21'
  end_tag: v0.57.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args_formatter.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 1
    aider/coders/base_coder.py:
      Krazer: 1
      Paul Gauthier: 17
      Paul Gauthier (aider): 2
    aider/coders/chat_chunks.py:
      Paul Gauthier: 5
    aider/coders/editblock_coder.py:
      Paul Gauthier (aider): 27
    aider/commands.py:
      Krazer: 3
      Paul Gauthier: 1
      Paul Gauthier (aider): 34
    aider/io.py:
      Krazer: 27
      Paul Gauthier: 8
      Paul Gauthier (aider): 42
    aider/main.py:
      Krazer: 2
      Paul Gauthier: 5
      Paul Gauthier (aider): 8
    aider/models.py:
      Jay Alammar: 1
      Jay Alammar (aider): 13
      Paul Gauthier: 43
      Paul Gauthier (aider): 46
    aider/repo.py:
      Paul Gauthier: 3
    aider/run_cmd.py:
      Paul Gauthier: 8
      Paul Gauthier (aider): 33
    aider/sendchat.py:
      Paul Gauthier: 3
    aider/utils.py:
      Paul Gauthier: 2
    benchmark/benchmark.py:
      Paul Gauthier: 4
    scripts/issues.py:
      Paul Gauthier: 10
      Paul Gauthier (aider): 123
    scripts/versionbump.py:
      Paul Gauthier (aider): 8
    tests/basic/test_coder.py:
      Paul Gauthier: 1
    tests/basic/test_editblock.py:
      Christian Clauss: 2
    tests/basic/test_io.py:
      Paul Gauthier (aider): 37
    tests/basic/test_main.py:
      Paul Gauthier: 18
      Paul Gauthier (aider): 20
  grand_total:
    Christian Clauss: 2
    Jay Alammar: 1
    Jay Alammar (aider): 13
    Krazer: 33
    Paul Gauthier: 133
    Paul Gauthier (aider): 381
  start_tag: v0.56.0
  total_lines: 563
- aider_percentage: 53.45
  aider_total: 712
  end_date: '2024-09-29'
  end_tag: v0.58.0
  file_counts:
    .github/workflows/docker-build-test.yml:
      Paul Gauthier: 1
      Paul Gauthier (aider): 11
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Paul Gauthier: 8
      Paul Gauthier (aider): 109
      Stein Martin Hustad: 17
      fry69: 2
    aider/coders/__init__.py:
      Paul Gauthier: 6
      Paul Gauthier (aider): 2
    aider/coders/architect_coder.py:
      Paul Gauthier: 40
      Paul Gauthier (aider): 3
    aider/coders/base_coder.py:
      Jonathan Ellis: 1
      Paul Gauthier: 32
      Paul Gauthier (aider): 8
    aider/coders/editor_editblock_coder.py:
      Paul Gauthier: 6
      Paul Gauthier (aider): 1
    aider/coders/editor_whole_coder.py:
      Paul Gauthier: 7
    aider/coders/wholefile_coder.py:
      Paul Gauthier: 2
    aider/commands.py:
      Jonathan Ellis: 1
      Mike Bailey: 1
      Paul Gauthier: 20
      Paul Gauthier (aider): 78
      fry69: 2
    aider/help.py:
      Paul Gauthier: 27
      Paul Gauthier (aider): 7
    aider/history.py:
      Paul Gauthier: 1
    aider/io.py:
      Paul Gauthier: 39
      Paul Gauthier (aider): 62
      Stein Martin Hustad: 5
      fry69: 10
    aider/linter.py:
      Paul Gauthier: 6
    aider/main.py:
      Paul Gauthier: 13
      Paul Gauthier (aider): 6
      Stein Martin Hustad: 4
      fry69: 1
      rti: 1
    aider/models.py:
      Paul Gauthier: 58
      Paul Gauthier (aider): 85
    aider/repo.py:
      Paul Gauthier: 16
      Paul Gauthier (aider): 2
    aider/repomap.py:
      Paul Gauthier: 5
    aider/scrape.py:
      Paul Gauthier (aider): 3
    aider/sendchat.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 5
    aider/utils.py:
      Paul Gauthier: 4
    aider/versioncheck.py:
      Paul Gauthier: 2
    aider/voice.py:
      Mike Bailey: 17
      Paul Gauthier: 2
      Paul Gauthier (aider): 10
    benchmark/benchmark.py:
      Paul Gauthier: 25
      Paul Gauthier (aider): 29
      fry69: 3
    scripts/issues.py:
      Paul Gauthier: 5
      Paul Gauthier (aider): 45
    scripts/update-docs.sh:
      Paul Gauthier: 1
    scripts/yank-old-versions.py:
      Paul Gauthier (aider): 51
    tests/basic/test_commands.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 98
    tests/basic/test_io.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 97
    tests/basic/test_main.py:
      Paul Gauthier: 2
    tests/basic/test_models.py:
      Paul Gauthier: 4
    tests/basic/test_sanity_check_repo.py:
      fry69: 179
    tests/basic/test_wholefile.py:
      Paul Gauthier: 38
  grand_total:
    Jonathan Ellis: 2
    Mike Bailey: 18
    Paul Gauthier: 376
    Paul Gauthier (aider): 712
    Stein Martin Hustad: 26
    fry69: 197
    rti: 1
  start_tag: v0.57.0
  total_lines: 1332
- aider_percentage: 76.79
  aider_total: 172
  end_date: '2024-10-04'
  end_tag: v0.59.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 6
    aider/args_formatter.py:
      Paul Gauthier: 4
    aider/coders/architect_coder.py:
      Paul Gauthier: 1
    aider/coders/base_coder.py:
      Paul Gauthier: 6
    aider/coders/editblock_coder.py:
      Paul Gauthier: 1
    aider/commands.py:
      Paul Gauthier: 3
      Paul Gauthier (aider): 49
    aider/gui.py:
      Paul Gauthier: 2
    aider/main.py:
      Paul Gauthier: 10
      Paul Gauthier (aider): 4
    aider/models.py:
      Paul Gauthier (aider): 12
    aider/repomap.py:
      Paul Gauthier: 9
      Paul Gauthier (aider): 3
    aider/urls.py:
      Paul Gauthier: 2
    aider/versioncheck.py:
      Paul Gauthier: 1
    scripts/issues.py:
      Paul Gauthier: 1
    scripts/update-docs.sh:
      Paul Gauthier: 2
    tests/basic/test_commands.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 80
    tests/basic/test_models.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 18
    tests/basic/test_sanity_check_repo.py:
      Paul Gauthier: 1
    tests/help/test_help.py:
      Paul Gauthier: 1
  grand_total:
    Paul Gauthier: 52
    Paul Gauthier (aider): 172
  start_tag: v0.58.0
  total_lines: 224
- aider_percentage: 49.12
  aider_total: 140
  end_date: '2024-10-22'
  end_tag: v0.60.0
  file_counts:
    .github/workflows/close-stale.yml:
      Paul Gauthier: 5
      Paul Gauthier (aider): 19
    .github/workflows/pages.yml:
      Paul Gauthier: 3
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Paul Gauthier: 1
      fry69: 2
    aider/coders/base_coder.py:
      Paul Gauthier: 2
    aider/coders/editblock_coder.py:
      Paul Gauthier (aider): 3
    aider/commands.py:
      Paul Gauthier: 1
    aider/help.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 33
    aider/io.py:
      Jonathan Ellis: 10
      Paul Gauthier: 7
    aider/main.py:
      Paul Gauthier: 20
      Paul Gauthier (aider): 39
    aider/models.py:
      Paul Gauthier: 18
      Sven Grunewaldt: 24
      fry69: 16
    aider/resources/__init__.py:
      Paul Gauthier: 3
    aider/sendchat.py:
      Paul Gauthier: 3
    tests/basic/test_editblock.py:
      Paul Gauthier: 23
    tests/basic/test_main.py:
      Paul Gauthier: 1
    tests/help/test_help.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 46
  grand_total:
    Jonathan Ellis: 10
    Paul Gauthier: 93
    Paul Gauthier (aider): 140
    Sven Grunewaldt: 24
    fry69: 18
  start_tag: v0.59.0
  total_lines: 285
- aider_percentage: 67.61
  aider_total: 860
  end_date: '2024-11-01'
  end_tag: v0.61.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/analytics.py:
      Paul Gauthier: 75
      Paul Gauthier (aider): 89
    aider/args.py:
      Paul Gauthier: 5
      Paul Gauthier (aider): 29
    aider/coders/base_coder.py:
      Paul Gauthier: 56
      Paul Gauthier (aider): 43
    aider/coders/editblock_coder.py:
      Paul Gauthier: 14
    aider/commands.py:
      Paul Gauthier: 14
      Paul Gauthier (aider): 86
    aider/io.py:
      Paul Gauthier: 12
      Paul Gauthier (aider): 32
    aider/linter.py:
      Paul Gauthier: 6
    aider/main.py:
      Paul Gauthier: 48
      Paul Gauthier (aider): 10
    aider/models.py:
      Paul Gauthier: 54
      Paul Gauthier (aider): 63
      kAIto47802: 4
    aider/repomap.py:
      Paul Gauthier: 12
      Paul Gauthier (aider): 52
    aider/sendchat.py:
      Paul Gauthier: 23
      Paul Gauthier (aider): 23
    aider/urls.py:
      Paul Gauthier: 2
    aider/utils.py:
      Paul Gauthier (aider): 6
    scripts/issues.py:
      Paul Gauthier (aider): 13
    scripts/pip-compile.sh:
      Paul Gauthier (aider): 13
    scripts/update-docs.sh:
      Paul Gauthier: 1
      Paul Gauthier (aider): 5
    tests/basic/test_analytics.py:
      Paul Gauthier: 1
      Paul Gauthier (aider): 99
    tests/basic/test_commands.py:
      Konstantin L: 34
      Paul Gauthier: 45
      Paul Gauthier (aider): 267
    tests/basic/test_io.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 4
    tests/basic/test_main.py:
      Paul Gauthier (aider): 3
    tests/basic/test_models.py:
      Paul Gauthier: 3
      Paul Gauthier (aider): 9
    tests/basic/test_sanity_check_repo.py:
      Paul Gauthier (aider): 6
    tests/basic/test_sendchat.py:
      Paul Gauthier (aider): 8
  grand_total:
    Konstantin L: 34
    Paul Gauthier: 374
    Paul Gauthier (aider): 860
    kAIto47802: 4
  start_tag: v0.60.0
  total_lines: 1272
- aider_percentage: 84.0
  aider_total: 63
  end_date: '2024-11-04'
  end_tag: v0.62.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/args.py:
      Paul Gauthier (aider): 14
    aider/coders/editblock_coder.py:
      Paul Gauthier: 6
    aider/main.py:
      Paul Gauthier (aider): 4
    aider/models.py:
      Paul Gauthier: 5
      Paul Gauthier (aider): 45
  grand_total:
    Paul Gauthier: 12
    Paul Gauthier (aider): 63
  start_tag: v0.61.0
  total_lines: 75
- aider_percentage: 55.16
  aider_total: 385
  end_date: '2024-11-13'
  end_tag: v0.63.0
  file_counts:
    aider/__init__.py:
      Paul Gauthier: 1
    aider/coders/architect_coder.py:
      Paul Gauthier: 3
    aider/coders/base_coder.py:
      Paul Gauthier: 42
      Paul Gauthier (aider): 1
    aider/coders/editblock_coder.py:
      Paul Gauthier: 4
    aider/commands.py:
      Paul Gauthier: 13
    aider/exceptions.py:
      Paul Gauthier: 72
      Paul Gauthier (aider): 4
    aider/io.py:
      Paul Gauthier: 3
      Paul Gauthier (aider): 23
    aider/main.py:
      Paul Gauthier: 9
      Paul Gauthier (aider): 9
    aider/models.py:
      Logan Attwood: 29
      Paul Gauthier: 50
      Paul Gauthier (aider): 7
    aider/repo.py:
      Paul Gauthier: 7
    aider/repomap.py:
      Paul Gauthier: 4
    aider/sendchat.py:
      Paul Gauthier: 17
      Paul Gauthier (aider): 4
    scripts/issues.py:
      Paul Gauthier: 4
      Paul Gauthier (aider): 195
    tests/basic/test_coder.py:
      Paul Gauthier: 2
    tests/basic/test_commands.py:
      Paul Gauthier (aider): 20
    tests/basic/test_editblock.py:
      Paul Gauthier: 41
    tests/basic/test_exceptions.py:
      Paul Gauthier (aider): 65
    tests/basic/test_main.py:
      Paul Gauthier: 1
    tests/basic/test_sanity_check_repo.py:
      Paul Gauthier: 2
      Paul Gauthier (aider): 2
    tests/basic/test_sendchat.py:
      Paul Gauthier: 8
      Paul Gauthier (aider): 55
    tests/scrape/test_scrape.py:
      Paul Gauthier: 1
  grand_total:
    Logan Attwood: 29
    Paul Gauthier: 284
    Paul Gauthier (aider): 385
  start_tag: v0.62.0
  total_lines: 698

@@ -20,7 +20,7 @@
  versions: 0.30.2-dev
  seconds_per_case: 32.4
  total_cost: 13.8395

- dirname: 2024-03-06-16-42-00--claude3-sonnet-whole
  test_cases: 133
  model: claude-3-sonnet-20240229

@@ -43,7 +43,7 @@
  versions: 0.25.1-dev
  seconds_per_case: 23.1
  total_cost: 0.0000

- dirname: 2024-05-03-20-47-24--gemini-1.5-pro-diff-fenced
  test_cases: 133
  model: gemini-1.5-pro-latest

@@ -88,7 +88,7 @@
  versions: 0.33.1-dev
  seconds_per_case: 6.5
  total_cost: 0.5032

- dirname: 2023-11-06-21-23-59--gpt-3.5-turbo-0301
  test_cases: 133
  model: gpt-3.5-turbo-0301

@@ -111,7 +111,7 @@
  versions: 0.16.4-dev
  seconds_per_case: 6.5
  total_cost: 0.4822

- dirname: 2023-11-07-02-41-07--gpt-3.5-turbo-0613
  test_cases: 133
  model: gpt-3.5-turbo-0613

@@ -155,7 +155,7 @@
  versions: 0.30.2-dev
  seconds_per_case: 5.3
  total_cost: 0.3261

- dirname: 2024-01-25-23-37-15--jan-exercism-gpt-4-0125-preview-udiff
  test_cases: 133
  model: gpt-4-0125-preview

@@ -178,7 +178,7 @@
  versions: 0.22.1-dev
  seconds_per_case: 44.8
  total_cost: 14.6428

- dirname: 2024-05-04-15-07-30--redo-gpt-4-0314-diff-reminder-rules
  test_cases: 133
  model: gpt-4-0314

@@ -201,7 +201,7 @@
  versions: 0.31.2-dev
  seconds_per_case: 19.8
  total_cost: 16.2689

- dirname: 2023-12-16-21-24-28--editblock-gpt-4-0613-actual-main
  test_cases: 133
  model: gpt-4-0613

@@ -228,7 +228,7 @@
- dirname: 2024-05-08-21-16-03--may-gpt-4-1106-preview-udiff
  test_cases: 133
  model: gpt-4-1106-preview
  released: 2023-11-06
  edit_format: udiff
  commit_hash: 87664dc
  pass_rate_1: 51.9

@@ -247,7 +247,7 @@
  versions: 0.33.1-dev
  seconds_per_case: 20.4
  total_cost: 6.6061

- dirname: 2024-05-01-02-09-20--gpt-4-turbo-examples
  test_cases: 133
  model: gpt-4-turbo-2024-04-09 (udiff)

@@ -270,7 +270,7 @@
  versions: 0.30.2-dev
  seconds_per_case: 22.8
  total_cost: 6.3337

- dirname: 2024-05-03-22-24-48--openrouter--llama3-diff-examples-sys-msg
  test_cases: 132
  model: llama3-70b-8192

@@ -293,7 +293,7 @@
  versions: 0.31.2-dev
  seconds_per_case: 14.5
  total_cost: 0.4311

- dirname: 2024-05-06-18-31-08--command-r-plus-whole-final
  test_cases: 133
  model: command-r-plus

@@ -316,11 +316,11 @@
  versions: 0.31.2-dev
  seconds_per_case: 22.9
  total_cost: 2.7494

- dirname: 2024-05-07-20-32-37--qwen1.5-110b-chat-whole
  test_cases: 133
  model: qwen1.5-110b-chat
  released: 2024-02-04
  edit_format: whole
  commit_hash: 70b1c0c
  pass_rate_1: 30.8

@@ -339,7 +339,7 @@
  versions: 0.31.2-dev
  seconds_per_case: 46.9
  total_cost: 0.0000

- dirname: 2024-05-07-20-57-04--wizardlm-2-8x22b-whole
  test_cases: 133
  model: WizardLM-2 8x22B
@@ -547,7 +547,7 @@

- dirname: 2024-07-04-14-32-08--claude-3.5-sonnet-diff-continue
  test_cases: 133
  model: claude-3.5-sonnet
  model: claude-3.5-sonnet-20240620
  edit_format: diff
  commit_hash: 35f21b5
  pass_rate_1: 57.1

@@ -563,12 +563,12 @@
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --sonnet
  command: aider --model claude-3.5-sonnet-20240620
  date: 2024-07-04
  versions: 0.42.1-dev
  seconds_per_case: 17.6
  total_cost: 3.6346

- dirname: 2024-07-01-21-41-48--haiku-whole
  test_cases: 133
  model: claude-3-haiku-20240307

@@ -832,30 +832,6 @@
  seconds_per_case: 6.5
  total_cost: 0.0000

- dirname: 2024-08-14-13-07-12--chatgpt-4o-latest-diff
  test_cases: 133
  model: chatgpt-4o-latest
  edit_format: diff
  commit_hash: b1c3769
  pass_rate_1: 53.4
  pass_rate_2: 69.2
  percent_cases_well_formed: 97.7
  error_outputs: 27
  num_malformed_responses: 5
  num_with_malformed_responses: 3
  user_asks: 7
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 0
  command: aider --model openai/chatgpt-4o-latest
  date: 2024-08-14
  released: 2024-08-08
  versions: 0.50.2-dev
  seconds_per_case: 26.3
  total_cost: 3.6113

- dirname: 2024-08-28-07-10-50--gemini-1.5-pro-exp-0827-diff-fenced
  test_cases: 133
  model: gemini-1.5-pro-exp-0827
@@ -1110,28 +1086,28 @@
|
||||
seconds_per_case: 103.0
|
||||
total_cost: 5.3725
|
||||
|
||||
- dirname: 2024-09-12-20-56-22--o1-mini-diff
|
||||
test_cases: 133
|
||||
model: o1-mini (diff)
|
||||
- dirname: 2024-09-21-16-40-56--o1-mini-flex-sr-markers
|
||||
test_cases: 36
|
||||
model: o1-mini
|
||||
edit_format: diff
|
||||
commit_hash: 4598a37-dirty, 291b456, 752e823-dirty
|
||||
pass_rate_1: 45.1
|
||||
pass_rate_2: 62.4
|
||||
percent_cases_well_formed: 85.7
|
||||
error_outputs: 26
|
||||
num_malformed_responses: 26
|
||||
num_with_malformed_responses: 19
|
||||
user_asks: 2
|
||||
commit_hash: 5493654
|
||||
pass_rate_1: 50.0
|
||||
pass_rate_2: 61.1
|
||||
percent_cases_well_formed: 100.0
|
||||
error_outputs: 0
|
||||
num_malformed_responses: 0
|
||||
num_with_malformed_responses: 0
|
||||
user_asks: 3
|
||||
lazy_comments: 0
|
||||
syntax_errors: 0
|
||||
indentation_errors: 0
|
||||
indentation_errors: 1
|
||||
exhausted_context_windows: 0
|
||||
test_timeouts: 1
|
||||
command: aider --model o1-mini --edit-format diff
|
||||
date: 2024-09-12
|
||||
test_timeouts: 0
|
||||
command: aider --model o1-mini
|
||||
date: 2024-09-21
|
||||
versions: 0.56.1.dev
|
||||
seconds_per_case: 177.7
|
||||
total_cost: 11.1071
|
||||
seconds_per_case: 26.7
|
||||
total_cost: 2.4226
|
||||
|
||||
- dirname: 2024-09-21-16-45-11--o1-preview-flex-sr-markers
|
||||
test_cases: 133
|
||||
@@ -1155,7 +1131,7 @@
|
||||
versions: 0.56.1.dev
|
||||
seconds_per_case: 80.9
|
||||
total_cost: 63.9190
|
||||
|
||||
|
||||
- dirname: 2024-09-19-16-58-29--qwen2.5-coder:7b-instruct-q8_0
|
||||
test_cases: 133
|
||||
model: qwen2.5-coder:7b-instruct-q8_0
|
||||
@@ -1178,7 +1154,7 @@
|
||||
versions: 0.56.0
|
||||
seconds_per_case: 9.3
|
||||
total_cost: 0.0000
|
||||
|
||||
|
||||
- dirname: 2024-09-20-20-20-19--qwen-2.5-72b-instruct-diff
|
||||
test_cases: 133
|
||||
model: qwen-2.5-72b-instruct (bf16)
|
||||
@@ -1200,4 +1176,645 @@
|
||||
date: 2024-09-20
|
||||
versions: 0.56.1.dev
|
||||
seconds_per_case: 39.8
|
||||
total_cost: 0.0000
|
||||
total_cost: 0.0000
|
||||
|
||||
- dirname: 2024-09-21-11-56-43--Codestral-22B-v0.1-Q4_K_M.gguf_whole
  test_cases: 133
  model: Codestral-22B-v0.1-Q4_K_M
  edit_format: whole
  commit_hash: 2753ac6-dirty
  pass_rate_1: 36.1
  pass_rate_2: 48.1
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 8
  lazy_comments: 6
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 4
  command: aider --model Codestral-22B-v0.1-Q4_K_M
  date: 2024-09-21
  versions: 0.56.1.dev
  seconds_per_case: 656.4
  total_cost: 0.9108

- dirname: 2024-09-24-16-26-45--gemini-1.5-pro-002-diff-fenced
  test_cases: 133
  model: gemini-1.5-pro-002
  edit_format: diff-fenced
  commit_hash: 6b5fe9b, 3edcd71
  pass_rate_1: 49.6
  pass_rate_2: 65.4
  percent_cases_well_formed: 96.2
  error_outputs: 17
  num_malformed_responses: 17
  num_with_malformed_responses: 5
  user_asks: 3
  lazy_comments: 0
  syntax_errors: 2
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 4
  command: aider --model gemini/gemini-1.5-pro-002
  date: 2024-09-24
  versions: 0.57.2.dev
  seconds_per_case: 11.6
  total_cost: 2.8166

- dirname: 2024-09-24-16-33-23--gemini-1.5-flash-002-whole
  test_cases: 133
  model: gemini-1.5-flash-002
  edit_format: whole
  commit_hash: 3edcd71
  pass_rate_1: 37.6
  pass_rate_2: 51.1
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 3
  lazy_comments: 0
  syntax_errors: 1
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model gemini/gemini-1.5-flash-002
  date: 2024-09-24
  versions: 0.57.2.dev
  seconds_per_case: 5.1
  total_cost: 0.0515

- dirname: 2024-09-24-15-18-59--gemini-1.5-flash-8b-exp-0924-whole
  test_cases: 133
  model: gemini-1.5-flash-8b-exp-0924
  edit_format: whole
  commit_hash: 86faaa6
  pass_rate_1: 33.1
  pass_rate_2: 38.3
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 9
  lazy_comments: 6
  syntax_errors: 8
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model gemini/gemini-1.5-flash-8b-exp-0924
  date: 2024-09-24
  versions: 0.57.2.dev
  seconds_per_case: 6.6
  total_cost: 0.0000

- dirname: 2024-09-28-18-30-20--codestral-whole
  test_cases: 133
  model: ollama/codestral
  edit_format: whole
  commit_hash: 1971285-dirty
  pass_rate_1: 33.8
  pass_rate_2: 45.9
  percent_cases_well_formed: 98.5
  error_outputs: 8
  num_malformed_responses: 8
  num_with_malformed_responses: 2
  user_asks: 12
  lazy_comments: 6
  syntax_errors: 5
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 4
  command: aider --model ollama/codestral
  date: 2024-09-28
  versions: 0.57.2.dev
  seconds_per_case: 67.2
  total_cost: 0.0000

- dirname: 2024-09-29-17-51-11--codegeex4-whole-2
  test_cases: 133
  model: ollama/codegeex4
  edit_format: whole
  commit_hash: 228ae24
  pass_rate_1: 28.6
  pass_rate_2: 32.3
  percent_cases_well_formed: 97.0
  error_outputs: 20
  num_malformed_responses: 20
  num_with_malformed_responses: 4
  user_asks: 56
  lazy_comments: 5
  syntax_errors: 5
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 4
  command: aider --model ollama/codegeex4
  date: 2024-09-29
  versions: 0.57.2.dev
  seconds_per_case: 128.1
  total_cost: 0.0000

- dirname: 2024-09-30-00-09-00--wojtek-opencodeinterpreter-6.7b-whole-2
  test_cases: 133
  model: ollama/wojtek/opencodeinterpreter:6.7b
  edit_format: whole
  commit_hash: 6d586fd
  pass_rate_1: 26.3
  pass_rate_2: 30.1
  percent_cases_well_formed: 91.0
  error_outputs: 18
  num_malformed_responses: 18
  num_with_malformed_responses: 12
  user_asks: 79
  lazy_comments: 7
  syntax_errors: 0
  indentation_errors: 1
  exhausted_context_windows: 0
  test_timeouts: 6
  command: aider --model ollama/wojtek/opencodeinterpreter:6.7b
  date: 2024-09-30
  versions: 0.58.1.dev
  seconds_per_case: 59.3
  total_cost: 0.0000

- dirname: 2024-09-30-03-49-01--mistral-nemo-12b-instruct-2407-q4_K_M-whole-1
  test_cases: 133
  model: ollama/mistral-nemo:12b-instruct-2407-q4_K_M
  edit_format: whole
  commit_hash: ba4dec8
  pass_rate_1: 22.6
  pass_rate_2: 33.1
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 53
  lazy_comments: 37
  syntax_errors: 2
  indentation_errors: 2
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model ollama/mistral-nemo:12b-instruct-2407-q4_K_M
  date: 2024-09-30
  versions: 0.58.1.dev
  seconds_per_case: 34.7
  total_cost: 0.0000

- dirname: 2024-09-30-14-09-43--qwen2.5-32b-whole-2
  test_cases: 133
  model: ollama/qwen2.5:32b
  edit_format: whole
  commit_hash: 765c4cb
  pass_rate_1: 44.4
  pass_rate_2: 54.1
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 9
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model ollama/qwen2.5:32b
  date: 2024-09-30
  versions: 0.58.1.dev
  seconds_per_case: 134.9
  total_cost: 0.0000

- dirname: 2024-09-30-19-35-40--llama3.2-3b-instruct-fp16-whole-1
  test_cases: 133
  model: ollama/llama3.2:3b-instruct-fp16
  edit_format: whole
  commit_hash: 3f12290
  pass_rate_1: 20.3
  pass_rate_2: 26.3
  percent_cases_well_formed: 97.0
  error_outputs: 21
  num_malformed_responses: 21
  num_with_malformed_responses: 4
  user_asks: 73
  lazy_comments: 11
  syntax_errors: 1
  indentation_errors: 3
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model ollama/llama3.2:3b-instruct-fp16
  date: 2024-09-30
  versions: 0.58.1.dev
  seconds_per_case: 66.6
  total_cost: 0.0000

- dirname: 2024-09-30-23-01-24--hermes3-8b-llama3.1-fp16-whole-2
  test_cases: 133
  model: ollama/hermes3:8b-llama3.1-fp16
  edit_format: whole
  commit_hash: c5ba4f7
  pass_rate_1: 24.1
  pass_rate_2: 30.1
  percent_cases_well_formed: 98.5
  syntax_errors: 0
  exhausted_context_windows: 0
  command: aider --model ollama/hermes3:8b-llama3.1-fp16
  date: 2024-09-30
  versions: 0.58.1.dev
  seconds_per_case: 64.7
  total_cost: 0.0000

- dirname: 2024-10-01-02-33-11--mistral-small-whole-1
  test_cases: 133
  model: ollama/mistral-small
  edit_format: whole
  commit_hash: 8a908fa
  pass_rate_1: 30.1
  pass_rate_2: 38.3
  percent_cases_well_formed: 99.2
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  command: aider --model ollama/mistral-small
  date: 2024-10-01
  versions: 0.58.1.dev
  seconds_per_case: 84.6
  total_cost: 0.0000

- dirname: 2024-10-01-07-05-40--yi-coder-9b-chat-fp16-whole-1
  test_cases: 133
  model: ollama/yi-coder:9b-chat-fp16
  edit_format: whole
  commit_hash: 52c6632-dirty
  pass_rate_1: 39.8
  pass_rate_2: 43.6
  percent_cases_well_formed: 99.2
  lazy_comments: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  command: aider --model ollama/yi-coder:9b-chat-fp16
  date: 2024-10-01
  versions: 0.58.1.dev
  seconds_per_case: 63.7
  total_cost: 0.0000

- dirname: 2024-10-01-16-50-09--hermes3-whole-4
  test_cases: 133
  model: ollama/hermes3
  edit_format: whole
  commit_hash: 415e898
  pass_rate_1: 21.1
  pass_rate_2: 22.6
  percent_cases_well_formed: 98.5
  exhausted_context_windows: 0
  command: aider --model ollama/hermes3
  date: 2024-10-01
  versions: 0.58.1.dev
  seconds_per_case: 24.8
  total_cost: 0.0000

- dirname: 2024-10-04-16-30-08--chatgpt-4o-latest-diff-oct4
  test_cases: 133
  model: openai/chatgpt-4o-latest
  edit_format: diff
  commit_hash: af10953
  pass_rate_1: 56.4
  pass_rate_2: 72.2
  percent_cases_well_formed: 97.0
  error_outputs: 4
  num_malformed_responses: 4
  num_with_malformed_responses: 4
  user_asks: 21
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model openai/chatgpt-4o-latest
  date: 2024-10-04
  versions: 0.58.2.dev
  seconds_per_case: 23.7
  total_cost: 4.0641

- dirname: 2024-10-05-20-03-10--dracarys-glhf-whole
  test_cases: 133
  model: Dracarys2-72B-Instruct
  edit_format: whole
  commit_hash: 04a2cbb
  pass_rate_1: 55.6
  pass_rate_2: 66.9
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 1
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 0
  command: (via glhf.chat)
  date: 2024-10-05
  versions: 0.59.2.dev
  seconds_per_case: 46.7
  total_cost: 0.0000

- dirname: 2024-10-13-21-33-42--grok2-whole
  test_cases: 133
  model: Grok-2
  edit_format: whole
  commit_hash: 0a497b7
  pass_rate_1: 45.9
  pass_rate_2: 58.6
  percent_cases_well_formed: 98.5
  error_outputs: 7
  num_malformed_responses: 7
  num_with_malformed_responses: 2
  user_asks: 24
  lazy_comments: 4
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model openrouter/x-ai/grok-2
  date: 2024-10-13
  versions: 0.59.2.dev
  seconds_per_case: 34.6
  total_cost: 0.0000

- dirname: 2024-10-13-23-58-44--grok2mini-whole
  test_cases: 133
  model: Grok-2-mini
  edit_format: whole
  commit_hash: 0a497b7-dirty, 0a497b7
  pass_rate_1: 40.6
  pass_rate_2: 54.9
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 8
  lazy_comments: 2
  syntax_errors: 2
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 0
  command: aider --model openrouter/x-ai/grok-2-mini
  date: 2024-10-13
  versions: 0.59.2.dev
  seconds_per_case: 32.1
  total_cost: 0.0000

- dirname: 2024-10-16-15-55-37--nemotron-glhf-whole3
  test_cases: 133
  model: Llama-3.1-Nemotron-70B-Instruct-HF
  edit_format: whole
  commit_hash: 6bb9b25-dirty
  pass_rate_1: 36.8
  pass_rate_2: 54.9
  percent_cases_well_formed: 99.2
  error_outputs: 17
  num_malformed_responses: 1
  num_with_malformed_responses: 1
  user_asks: 53
  lazy_comments: 17
  syntax_errors: 1
  indentation_errors: 2
  exhausted_context_windows: 0
  test_timeouts: 3
  command: (via glhf.chat)
  date: 2024-10-16
  versions: 0.59.2.dev
  seconds_per_case: 64.9
  total_cost: 0.0000

- dirname: 2024-10-22-17-45-28--sonnet-1022-diff-fixed-model-settings
  test_cases: 133
  model: claude-3-5-sonnet-20241022
  edit_format: diff
  commit_hash: 3b14eb9
  pass_rate_1: 69.2
  pass_rate_2: 84.2
  percent_cases_well_formed: 99.2
  error_outputs: 1
  num_malformed_responses: 1
  num_with_malformed_responses: 1
  user_asks: 0
  lazy_comments: 1
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 0
  command: aider --model anthropic/claude-3-5-sonnet-20241022
  date: 2024-10-22
  versions: 0.59.2.dev
  seconds_per_case: 18.6
  total_cost: 0.0000

- dirname: 2024-11-04-19-19-32--haiku35-diff-ex-as-sys-false
  test_cases: 133
  model: claude-3-5-haiku-20241022
  edit_format: diff
  commit_hash: 03bbdb0-dirty
  pass_rate_1: 61.7
  pass_rate_2: 75.2
  percent_cases_well_formed: 95.5
  error_outputs: 11
  num_malformed_responses: 11
  num_with_malformed_responses: 6
  user_asks: 1
  lazy_comments: 1
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 2
  command: aider --model anthropic/claude-3-5-haiku-20241022
  date: 2024-11-04
  versions: 0.61.1.dev
  seconds_per_case: 18.4
  total_cost: 0.0000

- dirname: 2024-11-07-06-15-36--Qwen2.5.1-Coder-7B-Instruct-GGUF:Q8_0-32k-whole
  test_cases: 133
  model: ollama/Qwen2.5.1-Coder-7B-Instruct-GGUF:Q8_0-32k
  edit_format: whole
  commit_hash: e76704e
  pass_rate_1: 52.6
  pass_rate_2: 63.9
  percent_cases_well_formed: 100.0
  error_outputs: 0
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 4
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 1
  command: aider --model ollama/Qwen2.5.1-Coder-7B-Instruct-GGUF:Q8_0-32k
  date: 2024-11-07
  versions: 0.59.2.dev
  seconds_per_case: 18.2
  total_cost: 0.0000

- dirname: 2024-10-29-00-29-09--Qwen2.5-Coder-0.5B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-0.5B-Instruct
  edit_format: whole
  commit_hash: 58bd375
  pass_rate_1: 14.3
  pass_rate_2: 14.3
  percent_cases_well_formed: 100.0
  error_outputs: 20
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 45
  lazy_comments: 0
  syntax_errors: 2
  indentation_errors: 0
  exhausted_context_windows: 20
  test_timeouts: 2
  command: aider --model openai/Qwen2.5-Coder-0.5B-Instruct
  date: 2024-10-29
  versions: 0.59.2.dev
  seconds_per_case: 16.0
  total_cost: 0.0000

- dirname: 2024-11-11-19-37-01--Qwen2.5-Coder-1.5B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-1.5B-Instruct
  edit_format: whole
  commit_hash: bb5681c
  pass_rate_1: 28.6
  pass_rate_2: 31.6
  percent_cases_well_formed: 100.0
  error_outputs: 5
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 13
  lazy_comments: 2
  syntax_errors: 1
  indentation_errors: 0
  exhausted_context_windows: 5
  test_timeouts: 2
  command: aider --model openai/Qwen2.5-Coder-1.5B-Instruct
  date: 2024-11-11
  versions: 0.59.2.dev
  seconds_per_case: 27.4
  total_cost: 0.0000

- dirname: 2024-11-04-02-25-32--Qwen2.5-Coder-3B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-3B-Instruct
  edit_format: whole
  commit_hash: 0ba3647
  pass_rate_1: 33.8
  pass_rate_2: 39.1
  percent_cases_well_formed: 100.0
  error_outputs: 4
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 3
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 4
  test_timeouts: 6
  command: aider --model openai/Qwen2.5-Coder-3B-Instruct
  date: 2024-11-04
  versions: 0.59.2.dev
  seconds_per_case: 18.7
  total_cost: 0.0000

- dirname: 2024-10-16-16-20-59--Qwen2.5-Coder-7B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-7B-Instruct
  edit_format: whole
  commit_hash: 92fe979-dirty
  pass_rate_1: 51.9
  pass_rate_2: 57.9
  percent_cases_well_formed: 100.0
  error_outputs: 2
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 2
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 2
  test_timeouts: 5
  command: aider --model openai/Qwen2.5-Coder-7B-Instruct
  date: 2024-10-16
  versions: 0.59.2.dev
  seconds_per_case: 10.5
  total_cost: 0.0000

- dirname: 2024-10-29-11-53-39--Qwen2.5-Coder-14B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-14B-Instruct
  edit_format: whole
  commit_hash: 58bd375
  pass_rate_1: 58.6
  pass_rate_2: 69.2
  percent_cases_well_formed: 100.0
  error_outputs: 3
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 2
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 3
  test_timeouts: 0
  command: aider --model openai/Qwen2.5-Coder-14B-Instruct
  date: 2024-10-29
  versions: 0.59.2.dev
  seconds_per_case: 18.3
  total_cost: 0.0000

- dirname: 2024-11-09-10-57-11--Qwen2.5-Coder-32B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-32B-Instruct (whole)
  edit_format: whole
  commit_hash: ec9982a
  pass_rate_1: 60.9
  pass_rate_2: 73.7
  percent_cases_well_formed: 100.0
  error_outputs: 1
  num_malformed_responses: 0
  num_with_malformed_responses: 0
  user_asks: 1
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 1
  test_timeouts: 1
  command: aider --model openai/Qwen2.5-Coder-32B-Instruct
  date: 2024-11-09
  versions: 0.59.2.dev
  seconds_per_case: 26.6
  total_cost: 0.0000

- dirname: 2024-11-09-11-09-15--Qwen2.5-Coder-32B-Instruct
  test_cases: 133
  model: Qwen2.5-Coder-32B-Instruct (diff)
  edit_format: diff
  commit_hash: ec9982a
  pass_rate_1: 59.4
  pass_rate_2: 71.4
  percent_cases_well_formed: 94.7
  error_outputs: 17
  num_malformed_responses: 17
  num_with_malformed_responses: 7
  user_asks: 1
  lazy_comments: 0
  syntax_errors: 0
  indentation_errors: 0
  exhausted_context_windows: 0
  test_timeouts: 3
  command: aider --model openai/Qwen2.5-Coder-32B-Instruct
  date: 2024-11-09
  versions: 0.59.2.dev
  seconds_per_case: 22.5
  total_cost: 0.0000

@@ -145,7 +145,7 @@
|
||||
|
||||
- dirname: 2024-07-01-18-30-33--refac-claude-3.5-sonnet-diff-not-lazy
|
||||
test_cases: 89
|
||||
model: claude-3.5-sonnet (diff)
|
||||
model: claude-3.5-sonnet-20240620
|
||||
edit_format: diff
|
||||
commit_hash: 7396e38-dirty
|
||||
pass_rate_1: 64.0
|
||||
@@ -229,4 +229,70 @@
|
||||
date: 2024-09-05
|
||||
versions: 0.55.1.dev
|
||||
seconds_per_case: 225.4
|
||||
total_cost: 1.0338
|
||||
total_cost: 1.0338
|
||||
|
||||
- dirname: 2024-10-22-19-57-27--refac-openrouter-sonnet-1022
|
||||
test_cases: 89
|
||||
model: claude-3-5-sonnet-20241022
|
||||
edit_format: diff
|
||||
commit_hash: 4a3e6ef
|
||||
pass_rate_1: 92.1
|
||||
percent_cases_well_formed: 91.0
|
||||
error_outputs: 13
|
||||
num_malformed_responses: 12
|
||||
num_with_malformed_responses: 8
|
||||
user_asks: 14
|
||||
lazy_comments: 2
|
||||
syntax_errors: 0
|
||||
indentation_errors: 0
|
||||
exhausted_context_windows: 0
|
||||
test_timeouts: 0
|
||||
command: aider --sonnet
|
||||
date: 2024-10-22
|
||||
versions: 0.60.1.dev
|
||||
seconds_per_case: 32.5
|
||||
total_cost: 8.4644
|
||||
|
||||
- dirname: 2024-10-22-20-03-10--refac-o1mini
|
||||
test_cases: 89
|
||||
model: o1-mini
|
||||
edit_format: diff
|
||||
commit_hash: 4a3e6ef-dirty
|
||||
pass_rate_1: 44.9
|
||||
percent_cases_well_formed: 29.2
|
||||
error_outputs: 151
|
||||
num_malformed_responses: 150
|
||||
num_with_malformed_responses: 63
|
||||
user_asks: 28
|
||||
lazy_comments: 2
|
||||
syntax_errors: 5
|
||||
indentation_errors: 4
|
||||
exhausted_context_windows: 1
|
||||
test_timeouts: 0
|
||||
command: aider --model o1-mini
|
||||
date: 2024-10-22
|
||||
versions: 0.60.1.dev
|
||||
seconds_per_case: 115.3
|
||||
total_cost: 29.0492
|
||||
|
||||
- dirname: 2024-10-22-20-26-36--refac-o1preview
|
||||
test_cases: 89
|
||||
model: o1-preview
|
||||
edit_format: diff
|
||||
commit_hash: 4a3e6ef-dirty
|
||||
pass_rate_1: 75.3
|
||||
percent_cases_well_formed: 57.3
|
||||
error_outputs: 75
|
||||
num_malformed_responses: 74
|
||||
num_with_malformed_responses: 38
|
||||
user_asks: 19
|
||||
lazy_comments: 2
|
||||
syntax_errors: 2
|
||||
indentation_errors: 3
|
||||
exhausted_context_windows: 1
|
||||
test_timeouts: 0
|
||||
command: aider --model o1-preview
|
||||
date: 2024-10-22
|
||||
versions: 0.60.1.dev
|
||||
seconds_per_case: 231.7
|
||||
total_cost: 120.9850
|
||||
@@ -2,7 +2,7 @@
|
||||
You can get started quickly like this:
|
||||
|
||||
```
|
||||
python -m pip install aider-chat
|
||||
python -m pip install -U aider-chat
|
||||
|
||||
# Change directory into a git repo
|
||||
cd /to/your/git/repo
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
If you need more help, please check our
|
||||
[GitHub issues](https://github.com/paul-gauthier/aider/issues)
|
||||
[GitHub issues](https://github.com/Aider-AI/aider/issues)
|
||||
and file a new issue if your problem isn't discussed.
|
||||
Or drop into our
|
||||
[Discord](https://discord.gg/Tv2uQnR88V)
|
||||
|
||||
@@ -2,4 +2,4 @@ You can send long, multi-line messages in the chat in a few ways:
|
||||
- Paste a multi-line message directly into the chat.
|
||||
- Enter `{` alone on the first line to start a multiline message and `}` alone on the last line to end it.
|
||||
- Use Meta-ENTER to start a new line without sending the message (Esc+ENTER in some environments).
|
||||
- Use `/clipboard` to paste text from the clipboard into the chat.
|
||||
- Use `/paste` to paste text from the clipboard into the chat.
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
<footer class="site-footer">
|
||||
Aider is AI pair programming in your terminal.
|
||||
Aider is on
|
||||
<a href="https://github.com/paul-gauthier/aider">GitHub</a>
|
||||
<a href="https://github.com/Aider-AI/aider">GitHub</a>
|
||||
and
|
||||
<a href="https://discord.gg/Tv2uQnR88V">Discord</a>.
|
||||
</footer>
|
||||
|
||||
@@ -110,9 +110,9 @@ source code, by including the critical lines of code for each definition.
|
||||
|
||||
Here's a
|
||||
sample of the map of the aider repo, just showing the maps of
|
||||
[base_coder.py](https://github.com/paul-gauthier/aider/blob/main/aider/coders/base_coder.py)
|
||||
[base_coder.py](https://github.com/Aider-AI/aider/blob/main/aider/coders/base_coder.py)
|
||||
and
|
||||
[commands.py](https://github.com/paul-gauthier/aider/blob/main/aider/commands.py)
|
||||
[commands.py](https://github.com/Aider-AI/aider/blob/main/aider/commands.py)
|
||||
:
|
||||
|
||||
```
|
||||
@@ -188,7 +188,7 @@ It specifically uses the
|
||||
[py-tree-sitter-languages](https://github.com/grantjenks/py-tree-sitter-languages)
|
||||
python module,
|
||||
which provides simple, pip-installable binary wheels for
|
||||
[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
|
||||
[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
|
||||
|
||||
Tree-sitter parses source code into an Abstract Syntax Tree (AST) based
|
||||
on the syntax of the programming language.
|
||||
@@ -209,7 +209,7 @@ that aider originally used.
|
||||
Switching from ctags to tree-sitter provides a bunch of benefits:
|
||||
|
||||
- The map is richer, showing full function call signatures and other details straight from the source files.
|
||||
- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install aider-chat`.
|
||||
- Thanks to `py-tree-sitter-languages`, we get full support for many programming languages via a python package that's automatically installed as part of the normal `python -m pip install -U aider-chat`.
|
||||
- We remove the requirement for users to manually install `universal-ctags` via some external tool or package manager (brew, apt, choco, etc).
|
||||
- Tree-sitter integration is a key enabler for future work and capabilities for aider.
|
||||
|
||||
@@ -245,7 +245,7 @@ just install [aider](https://aider.chat/docs/install.html).
|
||||
## Credits
|
||||
|
||||
Aider uses
|
||||
[modified versions of the tags.scm files](https://github.com/paul-gauthier/aider/tree/main/aider/queries)
|
||||
[modified versions of the tags.scm files](https://github.com/Aider-AI/aider/tree/main/aider/queries)
|
||||
from these
|
||||
open source tree-sitter language implementations:
|
||||
|
||||
|
||||
@@ -23,14 +23,14 @@ making it the best available model for pair programming with AI.
|
||||
To use Claude 3 Opus with aider:
|
||||
|
||||
```
|
||||
python -m pip install aider-chat
|
||||
python -m pip install -U aider-chat
|
||||
export ANTHROPIC_API_KEY=sk-...
|
||||
aider --opus
|
||||
```
|
||||
|
||||
## Aider's code editing benchmark
|
||||
|
||||
[Aider](https://github.com/paul-gauthier/aider)
|
||||
[Aider](https://github.com/Aider-AI/aider)
|
||||
is an open source command line chat tool that lets you
|
||||
pair program with AI on code in your local git repo.
|
||||
|
||||
|
||||
@@ -52,7 +52,7 @@ def some_complex_method(foo, bar):
|
||||
# ... implement method here ...
|
||||
```
|
||||
|
||||
Aider uses a ["laziness" benchmark suite](https://github.com/paul-gauthier/refactor-benchmark)
|
||||
Aider uses a ["laziness" benchmark suite](https://github.com/Aider-AI/refactor-benchmark)
|
||||
which is designed to both provoke and quantify lazy coding.
|
||||
It consists of
|
||||
89 python refactoring tasks
|
||||
|
||||
@@ -46,7 +46,7 @@ It also supports [connecting to almost any LLM](https://aider.chat/docs/llms.htm
|
||||
Use the `--browser` switch to launch the browser version of aider:
|
||||
|
||||
```
|
||||
python -m pip install aider-chat
|
||||
python -m pip install -U aider-chat
|
||||
|
||||
export OPENAI_API_KEY=<key> # Mac/Linux
|
||||
setx OPENAI_API_KEY <key> # Windows, restart shell after setx
|
||||
|
||||
@@ -15,7 +15,7 @@ nav_exclude: true
|
||||
I recently wanted to draw a graph showing how LLM code editing skill has been
|
||||
changing over time as new models have been released by OpenAI, Anthropic and others.
|
||||
I have all the
|
||||
[data in a yaml file](https://github.com/paul-gauthier/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
|
||||
[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
|
||||
[aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
|
||||
|
||||
Below is the aider chat transcript, which shows:
|
||||
|
||||
@@ -25,7 +25,7 @@ This increases the ability of the LLM to understand the problem and
|
||||
make the correct changes to resolve it.
|
||||
|
||||
Aider ships with basic linters built with tree-sitter that support
|
||||
[most popular programming languages](https://github.com/paul-gauthier/grep-ast/blob/main/grep_ast/parsers.py).
|
||||
[most popular programming languages](https://github.com/Aider-AI/grep-ast/blob/main/grep_ast/parsers.py).
|
||||
These built in linters will detect syntax errors and other fatal problems with the code.
|
||||
|
||||
You can also configure aider to use your preferred linters.
|
||||
|
||||
@@ -76,7 +76,7 @@ The held out "acceptance tests" were *only* used
after benchmarking to compute statistics on which problems aider
correctly resolved.

The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/paul-gauthier/aider-swe-bench).
The [full harness to run aider on SWE Bench Lite is available on GitHub](https://github.com/Aider-AI/aider-swe-bench).

The benchmarking process was similar to how a developer might use aider to
resolve a GitHub issue:

@@ -12,8 +12,12 @@ nav_exclude: true

[](https://aider.chat/assets/self-assembly.jpg)

{: .note }
This article is quite outdated. For current statistics, see
[aider's release history](/HISTORY.html).

The
[aider git repo](https://github.com/paul-gauthier/aider)
[aider git repo](https://github.com/Aider-AI/aider)
currently contains about 4K commits and 14K lines of code.

Aider made 15% of the commits, inserting 4.8K and deleting 1.5K lines of code.

@@ -64,7 +64,7 @@ with the problem statement
submitted as the opening chat message from "the user".
- After that aider ran as normal, except all of aider's
suggestions were always accepted without user approval.
- A [simple harness](https://github.com/paul-gauthier/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
- A [simple harness](https://github.com/Aider-AI/aider-swe-bench#the-aider-agent) was used to retry the SWE Bench problem if aider produced code that wasn't *plausibly correct*.
Plausibly correct means that aider reported that it had successfully edited the repo
without causing syntax errors or breaking any *pre-existing* tests.
- If the solution from aider with GPT-4o wasn't plausible, the harness launched aider to try again from scratch using Claude 3 Opus.

@@ -90,7 +90,7 @@ For a detailed discussion of the benchmark
methodology, see the
[article about aider's SWE Bench Lite results](https://aider.chat/2024/05/22/swe-bench-lite.html).
Also, the
[aider SWE Bench repository on GitHub](https://github.com/paul-gauthier/aider-swe-bench)
[aider SWE Bench repository on GitHub](https://github.com/Aider-AI/aider-swe-bench)
contains the harness and statistics code used for the benchmarks.

The benchmarking process was similar to how a developer might use aider to

@@ -37,8 +37,8 @@ Users who tested Sonnet with a preview of
[aider's latest release](https://aider.chat/HISTORY.html#aider-v0410)
were thrilled:

- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/paul-gauthier/aider/issues/705#issuecomment-2196026656)
- *Works like a charm. It is a monster. It refactors files of any size like it is nothing. The continue trick with Sonnet is truly the holy grail. Aider beats [other tools] hands down. I'm going to cancel both subscriptions.* -- [Emasoft](https://github.com/Aider-AI/aider/issues/705#issuecomment-2200338971)
- *Thanks heaps for this feature - it's a real game changer. I can be more ambitious when asking Claude for larger features.* -- [cngarrison](https://github.com/Aider-AI/aider/issues/705#issuecomment-2196026656)
- *Fantastic...! It's such an improvement not being constrained by output token length issues. [I refactored] a single JavaScript file into seven smaller files using a single Aider request.* -- [John Galt](https://discord.com/channels/1131200896827654144/1253492379336441907/1256250487934554143)

## Hitting the 4k token output limit

@@ -116,7 +116,7 @@ for more details, but
you can get started quickly with aider and Sonnet like this:

```
$ python -m pip install aider-chat
$ python -m pip install -U aider-chat

$ export ANTHROPIC_API_KEY=<key> # Mac/Linux
$ setx   ANTHROPIC_API_KEY <key> # Windows, restart shell after setx

@@ -30,7 +30,7 @@ included for scale.
You can code with all of these models using aider like this:

```
$ python -m pip install aider-chat
$ python -m pip install -U aider-chat

# Change directory into a git repo to work on
$ cd /to/your/git/repo

418 aider/website/_posts/2024-09-26-architect.md Normal file
@@ -0,0 +1,418 @@
---
title: Separating code reasoning and editing
excerpt: An Architect model describes how to solve the coding problem, and an Editor model translates that into file edits. This Architect/Editor approach produces SOTA benchmark results.
highlight_image: /assets/architect.jpg
draft: false
nav_exclude: true
---

{% if page.date %}
<p class="post-date">{{ page.date | date: "%B %d, %Y" }}</p>
{% endif %}

# Separating code reasoning and editing

Aider now has experimental support for using two models to complete each coding task:

- An Architect model is asked to describe how to solve the coding problem.
- An Editor model is given the Architect's solution and asked to produce specific code editing instructions to apply those changes to existing source files.

Splitting up "code reasoning" and "code editing" in this manner
has produced SOTA results on
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark).
Using o1-preview as the Architect with either DeepSeek or o1-mini as the
Editor produced the SOTA score of 85%.
Using the Architect/Editor approach
also significantly improved the benchmark scores of many
models, compared to their previous "solo" baseline scores (striped bars).

<style>
  .shaded td {
    background-color: #f2f2f2;
    border-top: 1px solid #ccc;
  }
  .table-container {
    max-width: 100%;
    overflow-x: auto;
  }
  .responsive-table {
    border-collapse: separate;
    border-spacing: 0;
    width: 100%;
    font-size: 16px;
    border: 1px solid #ddd;
  }
  .responsive-table th, .responsive-table td {
    padding: 8px;
    text-align: left;
    border-bottom: 1px solid #ddd;
    word-break: break-word;
  }
  .responsive-table th {
    background-color: #e2e2e2;
  }
  .responsive-table th:first-child,
  .responsive-table td:first-child {
    border-left: 1px solid #ddd;
  }
  .responsive-table th:last-child,
  .responsive-table td:last-child {
    border-right: 1px solid #ddd;
  }

  @media screen and (max-width: 600px) {
    .responsive-table {
      font-size: 12px;
    }
    .responsive-table th, .responsive-table td {
      padding: 4px;
    }
  }
</style>

<style>
  #passRateChart {
    max-width: 100%;
    height: auto !important;
  }
</style>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-annotation@1.0.2"></script>
{% assign sorted_data = site.data.architect | sort: "pass_rate_2" | reverse %}
<canvas id="passRateChart" width="400" height="250"></canvas>
<script>
document.addEventListener("DOMContentLoaded", function() {
  var ctx = document.getElementById('passRateChart').getContext('2d');

  // Function to determine aspect ratio and base font size based on screen width
  function getChartSettings() {
    if (window.innerWidth < 600) {
      return { aspectRatio: 1, baseFontSize: 8 }; // Taller aspect for small screens
    } else if (window.innerWidth < 800) {
      return { aspectRatio: 1.2, baseFontSize: 10 }; // Intermediate aspect for medium screens
    } else {
      return { aspectRatio: 1.4, baseFontSize: 12 }; // Wider aspect for larger screens
    }
  }

  var chartSettings = getChartSettings();
  var baseFontSize = chartSettings.baseFontSize;

  var labels = [];
  var data = [];
  var colorMapping = {
    "claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
    "gpt-4o": "rgba(255, 99, 132, 0.2)",
    "o1-preview": "rgba(54, 162, 235, 0.2)",
    "o1-mini": "rgba(255, 206, 86, 0.2)",
    "gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
  };
  var borderColorMapping = {
    "claude-3.5-sonnet": "rgba(75, 192, 192, 1)",
    "gpt-4o": "rgba(255, 99, 132, 1)",
    "o1-preview": "rgba(54, 162, 235, 1)",
    "o1-mini": "rgba(255, 206, 86, 1)",
    "gpt-4o-mini": "rgba(153, 102, 255, 1)"
  };
  var backgroundColors = [];
  var borderColors = [];
  var patterns = {};
  for (var key in colorMapping) {
    patterns[key] = ctx.createPattern(createStripePattern(colorMapping[key]), 'repeat');
  }
  {% assign grouped_data = sorted_data | group_by: "model" %}
  {% for group in grouped_data %}
    {% for item in group.items %}
      if ("{{ item.editor_model }}" == "") {
        labels.push("Baseline");
      } else {
        labels.push("{{ item.editor_model }}/{{ item.editor_edit_format | default: item.edit_format }}");
      }
      data.push({{ item.pass_rate_2 }});
      if ("{{ item.editor_model }}" == "") {
        backgroundColors.push(patterns["{{ item.model }}"]);
      } else {
        backgroundColors.push(colorMapping["{{ item.model }}"]);
      }
      borderColors.push(borderColorMapping["{{ item.model }}"]);
    {% endfor %}
  {% endfor %}
  labels.reverse();
  data.reverse();
  backgroundColors.reverse();
  borderColors.reverse();
  var chart = new Chart(ctx, {
    type: 'bar',
    data: {
      labels: labels,
      datasets: [{
        label: 'Pass Rate',
        data: data,
        backgroundColor: backgroundColors,
        borderColor: borderColors,
        borderWidth: 1
      }]
    },
    options: {
      responsive: true,
      maintainAspectRatio: true,
      aspectRatio: chartSettings.aspectRatio,
      scales: {
        y: {
          beginAtZero: true,
          title: {
            display: true,
            text: 'Pass Rate (%)',
            font: { size: baseFontSize + 6 }
          },
          ticks: {
            font: { size: baseFontSize }
          }
        },
        x: {
          title: {
            display: true,
            text: 'Editor model and edit format',
            font: { size: baseFontSize + 6 }
          },
          ticks: {
            font: { size: baseFontSize + 4 },
            maxRotation: 90, // Allow full rotation if needed
            minRotation: 45 // Start rotating at 45 degrees to fit more labels
          }
        }
      },
      plugins: {
        annotation: {
          annotations: {
            line1: {
              type: 'line',
              yMin: 79.7,
              yMax: 79.7,
              borderColor: 'rgba(255, 99, 132, 0.8)',
              borderWidth: 2,
              borderDash: [6, 6],
              label: {
                content: 'Previous SOTA',
                enabled: true,
                position: 'start',
                xAdjust: 10,
                font: { size: baseFontSize }
              }
            }
          }
        },
        legend: {
          display: true,
          title: {
            display: true,
            text: 'Architect model',
            font: {
              size: baseFontSize + 2,
              weight: 'bold'
            }
          },
          labels: {
            font: { size: baseFontSize + 4 },
            generateLabels: function(chart) {
              var colorMapping = {
                "o1-preview": "rgba(54, 162, 235, 0.2)",
                "claude-3.5-sonnet": "rgba(75, 192, 192, 0.2)",
                "gpt-4o": "rgba(255, 99, 132, 0.2)",
                "o1-mini": "rgba(255, 206, 86, 0.2)",
                "gpt-4o-mini": "rgba(153, 102, 255, 0.2)"
              };
              return Object.keys(colorMapping).reverse().map(function(key) {
                return {
                  text: key,
                  fillStyle: colorMapping[key],
                  strokeStyle: colorMapping[key].replace('0.2', '1'),
                  lineWidth: 1
                };
              });
            }
          }
        }
      }
    }
  });

  // Update aspect ratio and font sizes on window resize
  window.addEventListener('resize', function() {
    var newSettings = getChartSettings();
    chart.options.aspectRatio = newSettings.aspectRatio;
    baseFontSize = newSettings.baseFontSize;

    // Update font sizes
    chart.options.scales.y.title.font.size = baseFontSize + 6;
    chart.options.scales.y.ticks.font.size = baseFontSize;
    chart.options.scales.x.title.font.size = baseFontSize + 6;
    chart.options.scales.x.ticks.font.size = baseFontSize + 4;
    chart.options.plugins.annotation.annotations.line1.label.font.size = baseFontSize;
    chart.options.plugins.legend.title.font.size = baseFontSize + 4;
    chart.options.plugins.legend.labels.font.size = baseFontSize + 4;

    chart.update();
  });
});

function createStripePattern(baseColor) {
  var canvas = document.createElement('canvas');
  canvas.width = 10;
  canvas.height = 10;
  var ctx = canvas.getContext('2d');

  ctx.fillStyle = baseColor;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = 'rgba(0, 0, 0, 0.1)';
  ctx.lineWidth = 2;
  ctx.beginPath();
  ctx.moveTo(0, 0);
  ctx.lineTo(10, 10);
  ctx.stroke();

  return canvas;
}
</script>

## Motivation

This approach was motivated by the release of OpenAI's o1 models.
They are strong at reasoning, but often fail to output properly formatted
code editing instructions.
It helps to instead let them describe the solution
however they prefer and then pass that output to a more traditional LLM.
This second Editor LLM can then interpret the solution description and
produce the code editing instructions needed to update
the existing source code.

This approach has recently become attractive for aider due to
rapid improvements in the speed and costs of frontier models.
In particular, chaining older LLMs would have been quite slow and
incompatible with aider's goal of providing an interactive,
pair programming AI coding experience.

## Code reasoning and code editing

Normally aider asks the model to solve a coding problem in one prompt,
asking the LLM to explain the solution and return
a well formatted series of file edits.
All of [aider's editing formats](/docs/more/edit-formats.html)
require the LLM to return source code edits in a specific text
format, so that aider can process the edits and apply them to the local source files.

Because this all happens in a single prompt/response round trip to the LLM,
the model has to split its attention between
solving the coding problem and conforming to the edit format.

The Architect/Editor approach splits this into two inference steps, possibly
using two different LLMs:

1. Solve the coding problem (Architect).
2. Turn the proposed solution into a series of well formed code edits (Editor).

The Architect/Editor approach allows the Architect to focus on solving the coding problem
and *describe the solution however comes naturally to it*.
Similarly, the Editor can focus all of its attention on properly formatting the edits
without needing to reason much about how to solve the coding problem.

We can assign the Architect and Editor roles to LLMs which are well suited to their needs.
Strong reasoning models like o1-preview make excellent Architects, while
the Editor role can be assigned to an appropriate model based on cost, speed
and code editing skill.

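The two inference steps above can be sketched as a tiny pipeline. This is an illustrative sketch, not aider's implementation: `architect_llm` and `editor_llm` are hypothetical stand-ins for real chat-completion calls.

```python
# Sketch of the Architect/Editor split (illustrative, not aider's actual code).
# architect_llm and editor_llm are stand-ins for real chat-completion functions.

def run_task(user_request, architect_llm, editor_llm):
    # Step 1: the Architect reasons about the problem in free-form prose.
    plan = architect_llm(
        "Describe how to solve this coding problem:\n" + user_request
    )
    # Step 2: the Editor turns that plan into strictly formatted file edits,
    # without needing to re-solve the problem itself.
    return editor_llm(
        "Produce well formed code edits that implement this plan:\n" + plan
    )

# Stub "models" so the sketch runs without any API access.
architect = lambda prompt: "Plan: rename helper foo() to bar() in utils.py"
editor = lambda prompt: "EDIT utils.py: replace 'def foo(' with 'def bar('"

print(run_task("Rename foo to bar", architect, editor))
```

With real models, the Architect call would go to a strong reasoning model and the Editor call to a cheaper, faster model that is reliable at formatting edits.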
## Results

The graph above and the table below show
[aider's code editing benchmark](/docs/benchmarks.html#the-benchmark)
scores for various combinations of Architect and Editor models.

Some noteworthy observations:

- Pairing o1-preview as Architect with either Deepseek or o1-mini as Editor sets a SOTA significantly above the previous best score. This result is obtained with the "whole" editing format, which requires the Editor to output a full updated copy of each edited source file. Both of these steps are therefore quite slow, so probably not practical for interactive use with aider.
- Pairing OpenAI's o1-preview with Anthropic's Sonnet as the Editor produces the second best result. This is an entirely practical configuration for users able to work with both providers.
- Pairing many models with themselves in the Architect/Editor configuration can provide significant benefits. Sonnet, GPT-4o and GPT-4o-mini all scored higher when used as an Architect/Editor pair.
- Deepseek is surprisingly effective as an Editor model. It seems remarkably capable at turning proposed coding solutions into new, updated versions of the source files. Using the efficient "diff" editing format, Deepseek helps all the Architect models except for Sonnet.

## Try it!

The development version of aider
has built-in defaults to support Architect/Editor coding with
o1-preview, o1-mini, GPT-4o and Claude 3.5 Sonnet.
Run aider with `--architect` or get started quickly like this:

```
pip install -U aider-chat

# Change directory into a git repo
cd /to/your/git/repo

# Work with Claude 3.5 Sonnet as the Architect and Editor
export ANTHROPIC_API_KEY=your-key-goes-here
aider --sonnet --architect

# Work with OpenAI models, using gpt-4o as the Editor
export OPENAI_API_KEY=your-key-goes-here
aider --4o --architect
aider --o1-mini --architect
aider --o1-preview --architect
```

## More info

Aider has a number of "chat modes", and "architect" is available as a new chat mode.
The `--architect` switch is a shortcut for `--chat-mode architect`.
For more details, see documentation on
[aider's chat modes](/docs/usage/modes.html).


## Full results

Below are the benchmark results using various models as the Architect, paired with
various models as the Editor.
Each section includes a "baseline" result,
where the model works
by itself in aider's normal "code" editing mode
(not as part of an Architect/Editor configuration).
This "solo" baseline represents the performance previously available when using
this model with aider.

<div class="table-container">
  <table class="responsive-table">
    <thead>
      <tr>
        <th>Architect</th>
        <th>Editor</th>
        <th>Edit Format</th>
        <th>Pass Rate</th>
      </tr>
    </thead>
    <tbody>
      {% for group in grouped_data %}
        {% assign group_class = forloop.index | modulo: 2 | plus: 1 %}
        {% for item in group.items %}
          <tr class="{% if group_class == 1 %}shaded{% endif %}">
            <td>{{ item.model }}</td>
            <td>{% if item.editor_model %}{{ item.editor_model }}{% else %}<b>Baseline</b>{% endif %}</td>
            <td style="text-align: center;">{{ item.editor_edit_format | default: item.edit_format }}</td>
            <td style="text-align: right;">{{ item.pass_rate_2 }}%</td>
          </tr>
        {% endfor %}
      {% endfor %}
    </tbody>
  </table>
</div>
BIN aider/website/assets/architect.jpg Normal file
Binary file not shown.
After Width: | Height: | Size: 337 KiB

1000 aider/website/assets/sample-analytics.jsonl Normal file
File diff suppressed because it is too large. Load Diff
@@ -29,9 +29,12 @@

## Use claude-3-opus-20240229 model for the main chat
#opus: false

## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#sonnet: false

## Use claude-3-5-haiku-20241022 model for the main chat
#haiku: false

## Use gpt-4-0613 model for the main chat
#4: false

@@ -50,6 +53,12 @@

## Use deepseek/deepseek-coder model for the main chat
#deepseek: false

## Use o1-mini model for the main chat
#o1-mini: false

## Use o1-preview model for the main chat
#o1-preview: false

#################
# Model Settings:

@@ -83,17 +92,29 @@

## Specify what edit format the LLM should use (default depends on model)
#edit-format: xxx

## Use architect edit format for the main chat
#architect: false

## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#weak-model: xxx

## Specify the model to use for editor tasks (default depends on --model)
#editor-model: xxx

## Specify the edit format for the editor model (default: depends on editor model)
#editor-edit-format: xxx

## Only work with models that have meta-data available (default: True)
#show-model-warnings: true

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx

## Control how often the repo map is refreshed (default: auto)
#map-refresh: auto
## Specify the .env file to load (default: .env in git root)
#env-file: .env

#################
# Cache Settings:

## Enable caching of prompts (default: False)
#cache-prompts: false

@@ -101,15 +122,18 @@

## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#cache-keepalive-pings: false

###################
# Repomap Settings:

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx

## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#map-refresh: auto

## Multiplier for map tokens when no files are specified (default: 2)
#map-multiplier-no-files: true

## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx

## Specify the .env file to load (default: .env in git root)
#env-file: .env

################
# History Files:

@@ -155,6 +179,18 @@

## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff

## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: xxx

## Set the background color for the completion menu (default: terminal's default background color)
#completion-menu-bg-color: xxx

## Set the color for the current item in the completion menu (default: terminal's default background color)
#completion-menu-current-color: xxx

## Set the background color for the current item in the completion menu (default: terminal's default text color)
#completion-menu-current-bg-color: xxx

## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#code-theme: default

@@ -203,6 +239,9 @@

## Perform a dry run without modifying files (default: False)
#dry-run: false

## Skip the sanity check for the git repository (default: False)
#skip-sanity-check-repo: false

########################
# Fixing and committing:

@@ -212,7 +251,10 @@

## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#lint-cmd: xxx
## Specify multiple values like this:
#lint-cmd: [xxx,yyyy,zzz]
#lint-cmd:
# - xxx
# - yyy
# - zzz

## Enable/disable automatic linting after changes (default: True)
#auto-lint: true
@@ -226,25 +268,40 @@

## Run tests and fix problems found
#test: false

############
# Analytics:

## Enable/disable analytics for one session (default: False)
#analytics: false

## Specify a file to log analytics events
#analytics-log: xxx

## Permanently disable analytics
#analytics-disable: false

#################
# Other Settings:

## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:
#file: [xxx,yyyy,zzz]
#file:
# - xxx
# - yyy
# - zzz

## specify a read-only file (can be used multiple times)
#read: xxx
## Specify multiple values like this:
#read: [xxx,yyyy,zzz]
#read:
# - xxx
# - yyy
# - zzz

## Use VI editing mode in the terminal (default: False)
#vim: false

## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx

@@ -266,8 +323,11 @@

## Apply the changes from the given file instead of running the chat (debug)
#apply: xxx

## Apply clipboard contents as edits using the main model's editor format
#apply-clipboard-edits: false

## Always say yes to every confirmation
#yes: false
#yes-always: false

## Enable verbose output
#verbose: false
@@ -287,14 +347,29 @@

## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#message-file: xxx

## Load and execute /commands from a file on launch
#load: xxx

## Specify the encoding for input and output (default: utf-8)
#encoding: utf-8

## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
#config: xxx

## Run aider in your browser
## Run aider in your browser (default: False)
#gui: false

## Enable/disable suggesting shell commands (default: True)
#suggest-shell-commands: true

## Enable/disable fancy input with history and completion (default: True)
#fancy-input: true

#################
# Voice Settings:

## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#voice-format: wav

## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

@@ -33,9 +33,12 @@

## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=

## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#AIDER_SONNET=

## Use claude-3-5-haiku-20241022 model for the main chat
#AIDER_HAIKU=

## Use gpt-4-0613 model for the main chat
#AIDER_4=

@@ -54,6 +57,12 @@

## Use deepseek/deepseek-coder model for the main chat
#AIDER_DEEPSEEK=

## Use o1-mini model for the main chat
#AIDER_O1_MINI=

## Use o1-preview model for the main chat
#AIDER_O1_PREVIEW=

#################
# Model Settings:

@@ -87,17 +96,29 @@

## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=

## Use architect edit format for the main chat
#AIDER_ARCHITECT=

## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=

## Specify the model to use for editor tasks (default depends on --model)
#AIDER_EDITOR_MODEL=

## Specify the edit format for the editor model (default: depends on editor model)
#AIDER_EDITOR_EDIT_FORMAT=

## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=

## Control how often the repo map is refreshed (default: auto)
#AIDER_MAP_REFRESH=auto
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env

#################
# Cache Settings:

## Enable caching of prompts (default: False)
#AIDER_CACHE_PROMPTS=false
@@ -105,15 +126,18 @@

## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#AIDER_CACHE_KEEPALIVE_PINGS=false

###################
# Repomap Settings:

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=

## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#AIDER_MAP_REFRESH=auto

## Multiplier for map tokens when no files are specified (default: 2)
#AIDER_MAP_MULTIPLIER_NO_FILES=true

## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=

## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env

################
# History Files:

@@ -159,6 +183,18 @@

## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff

## Set the color for the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_COLOR=

## Set the background color for the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_BG_COLOR=

## Set the color for the current item in the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_CURRENT_COLOR=

## Set the background color for the current item in the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_CURRENT_BG_COLOR=

## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default

@@ -207,6 +243,9 @@
|
||||
## Perform a dry run without modifying files (default: False)
|
||||
#AIDER_DRY_RUN=false
|
||||
|
||||
## Skip the sanity check for the git repository (default: False)
|
||||
#AIDER_SKIP_SANITY_CHECK_REPO=false
|
||||
|
||||
########################
|
||||
# Fixing and committing:
|
||||
|
||||
@@ -228,6 +267,18 @@
|
||||
## Run tests and fix problems found
|
||||
#AIDER_TEST=false
|
||||
|
||||
############
|
||||
# Analytics:
|
||||
|
||||
## Enable/disable analytics for one session (default: False)
|
||||
#AIDER_ANALYTICS=false
|
||||
|
||||
## Specify a file to log analytics events
|
||||
#AIDER_ANALYTICS_LOG=
|
||||
|
||||
## Permanently disable analytics
|
||||
#AIDER_ANALYTICS_DISABLE=false
|
||||
|
||||
#################
|
||||
# Other Settings:
|
||||
|
||||
@@ -240,9 +291,6 @@
|
||||
## Use VI editing mode in the terminal (default: False)
|
||||
#AIDER_VIM=false
|
||||
|
||||
## Specify the language for voice using ISO 639-1 code (default: auto)
|
||||
#AIDER_VOICE_LANGUAGE=en
|
||||
|
||||
## Specify the language to use in the chat (default: None, uses system settings)
|
||||
#AIDER_CHAT_LANGUAGE=
|
||||
|
||||
@@ -261,8 +309,11 @@
|
||||
## Apply the changes from the given file instead of running the chat (debug)
|
||||
#AIDER_APPLY=
|
||||
|
||||
## Apply clipboard contents as edits using the main model's editor format
|
||||
#AIDER_APPLY_CLIPBOARD_EDITS=false
|
||||
|
||||
## Always say yes to every confirmation
|
||||
#AIDER_YES=
|
||||
#AIDER_YES_ALWAYS=
|
||||
|
||||
## Enable verbose output
|
||||
#AIDER_VERBOSE=false
|
||||
@@ -282,11 +333,26 @@
|
||||
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
|
||||
#AIDER_MESSAGE_FILE=
|
||||
|
||||
## Load and execute /commands from a file on launch
|
||||
#AIDER_LOAD=
|
||||
|
||||
## Specify the encoding for input and output (default: utf-8)
|
||||
#AIDER_ENCODING=utf-8
|
||||
|
||||
## Run aider in your browser
|
||||
## Run aider in your browser (default: False)
|
||||
#AIDER_GUI=false
|
||||
|
||||
## Enable/disable suggesting shell commands (default: True)
|
||||
#AIDER_SUGGEST_SHELL_COMMANDS=true
|
||||
|
||||
## Enable/disable fancy input with history and completion (default: True)
|
||||
#AIDER_FANCY_INPUT=true
|
||||
|
||||
#################
|
||||
# Voice Settings:
|
||||
|
||||
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
|
||||
#AIDER_VOICE_FORMAT=wav
|
||||
|
||||
## Specify the language for voice using ISO 639-1 code (default: auto)
|
||||
#AIDER_VOICE_LANGUAGE=en
|
||||
|
||||
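The settings in the sample above use plain dotenv `KEY=value` syntax, with `##` comment lines documenting each option and a leading `#` commenting the option itself out. As a rough illustration of how such a file is read (this is only a sketch, not aider's actual dotenv loader, which also handles quoting and escapes):

```python
# Minimal sketch of reading KEY=value settings from a .env-style file.
# Illustrative only: skips quoting, escapes, and `export` prefixes.

def parse_env_text(text):
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and commented-out settings like "#AIDER_VIM=false".
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

sample = """
## Suggested number of tokens to use for repo map (default: 1024)
#AIDER_MAP_TOKENS=

AIDER_MAP_TOKENS=2048
"""
print(parse_env_text(sample))  # {'AIDER_MAP_TOKENS': '2048'}
```

Uncommenting a `#AIDER_...=` line and giving it a value is all that is needed to activate a setting.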
@@ -19,7 +19,7 @@ and there's a lot
of interest about their ability to code compared to the previous versions.
With that in mind, I've been benchmarking the new models.

[Aider](https://github.com/paul-gauthier/aider)
[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
To do this, aider needs to be able to reliably recognize when GPT wants to edit

@@ -20,7 +20,7 @@ and there's a lot
of interest about their capabilities and performance.
With that in mind, I've been benchmarking the new models.

[Aider](https://github.com/paul-gauthier/aider)
[Aider](https://github.com/Aider-AI/aider)
is an open source command line chat tool that lets you work with GPT to edit
code in your local git repo.
Aider relies on a

@@ -55,7 +55,7 @@ about prompting GPT for complex tasks like coding. It's beneficial to
minimize the "cognitive overhead" of formatting the response, allowing
GPT to concentrate on the coding task at hand.

As a thought experiment, imagine a slack conversation with a junior developer where
As a thought experiment, imagine a slack conversation with a editor developer where
you ask them to write the code to add some new feature to your app.
They're going to type the response back to you by hand in the chat.
Should they type out the
@@ -168,7 +168,7 @@ requests:
### whole

The
[whole](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_prompts.py)
[whole](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_prompts.py)
format asks GPT to return an updated copy of the entire file, including any changes.
The file should be
formatted with normal markdown triple-backtick fences, inlined with the rest of its response text.
@@ -187,7 +187,7 @@ def main():

### diff

The [diff](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_prompts.py)
The [diff](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_prompts.py)
format also asks GPT to return edits as part of the normal response text,
in a simple diff format.
Each edit is a fenced code block that
@@ -209,7 +209,7 @@ demo.py

### whole-func

The [whole-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/wholefile_func_coder.py)
The [whole-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/wholefile_func_coder.py)
format requests updated copies of whole files to be returned using the function call API.


@@ -227,7 +227,7 @@ format requests updated copies of whole files to be returned using the function
### diff-func

The
[diff-func](https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_func_coder.py)
[diff-func](https://github.com/Aider-AI/aider/blob/main/aider/coders/editblock_func_coder.py)
format requests a list of
original/updated style edits to be returned using the function call API.

File diff suppressed because it is too large
@@ -23,8 +23,16 @@ load whichever is found first.

## A note on lists

The syntax for specifying a list of values is not standard yaml.
Instead, use this format:
Lists of values can be specified either as a bulleted list:

```
read:
- CONVENTIONS.md
- anotherfile.txt
- thirdfile.py
```

Or lists can be specified using commas and square brackets:

```
read: [CONVENTIONS.md, anotherfile.txt, thirdfile.py]
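The two YAML list spellings in the doc text above are equivalent once parsed. A small sketch of that equivalence, assuming the PyYAML library is available (aider's own config loading may differ in detail):

```python
# Both YAML list spellings parse to the same Python structure.
# Requires PyYAML (`pip install pyyaml`); demonstration only.
import yaml

block_style = """\
read:
- CONVENTIONS.md
- anotherfile.txt
- thirdfile.py
"""

flow_style = "read: [CONVENTIONS.md, anotherfile.txt, thirdfile.py]"

assert yaml.safe_load(block_style) == yaml.safe_load(flow_style)
print(yaml.safe_load(flow_style))
# {'read': ['CONVENTIONS.md', 'anotherfile.txt', 'thirdfile.py']}
```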
@@ -34,7 +42,7 @@ read: [CONVENTIONS.md, anotherfile.txt, thirdfile.py]

Below is a sample of the YAML config file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.aider.conf.yml).
[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.aider.conf.yml).

<!--[[[cog
from aider.args import get_sample_yaml
@@ -77,9 +85,12 @@ cog.outl("```")
## Use claude-3-opus-20240229 model for the main chat
#opus: false

## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#sonnet: false

## Use claude-3-5-haiku-20241022 model for the main chat
#haiku: false

## Use gpt-4-0613 model for the main chat
#4: false

@@ -98,6 +109,12 @@ cog.outl("```")
## Use deepseek/deepseek-coder model for the main chat
#deepseek: false

## Use o1-mini model for the main chat
#o1-mini: false

## Use o1-preview model for the main chat
#o1-preview: false

#################
# Model Settings:

@@ -131,17 +148,29 @@ cog.outl("```")
## Specify what edit format the LLM should use (default depends on model)
#edit-format: xxx

## Use architect edit format for the main chat
#architect: false

## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#weak-model: xxx

## Specify the model to use for editor tasks (default depends on --model)
#editor-model: xxx

## Specify the edit format for the editor model (default: depends on editor model)
#editor-edit-format: xxx

## Only work with models that have meta-data available (default: True)
#show-model-warnings: true

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx

## Control how often the repo map is refreshed (default: auto)
#map-refresh: auto
## Specify the .env file to load (default: .env in git root)
#env-file: .env

#################
# Cache Settings:

## Enable caching of prompts (default: False)
#cache-prompts: false
@@ -149,15 +178,18 @@ cog.outl("```")
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#cache-keepalive-pings: false

###################
# Repomap Settings:

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#map-tokens: xxx

## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#map-refresh: auto

## Multiplier for map tokens when no files are specified (default: 2)
#map-multiplier-no-files: true

## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx

## Specify the .env file to load (default: .env in git root)
#env-file: .env

################
# History Files:

@@ -203,6 +235,18 @@ cog.outl("```")
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff

## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: xxx

## Set the background color for the completion menu (default: terminal's default background color)
#completion-menu-bg-color: xxx

## Set the color for the current item in the completion menu (default: terminal's default background color)
#completion-menu-current-color: xxx

## Set the background color for the current item in the completion menu (default: terminal's default text color)
#completion-menu-current-bg-color: xxx

## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#code-theme: default

@@ -251,6 +295,9 @@ cog.outl("```")
## Perform a dry run without modifying files (default: False)
#dry-run: false

## Skip the sanity check for the git repository (default: False)
#skip-sanity-check-repo: false

########################
# Fixing and committing:

@@ -260,7 +307,10 @@ cog.outl("```")
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#lint-cmd: xxx
## Specify multiple values like this:
#lint-cmd: [xxx,yyyy,zzz]
#lint-cmd:
# - xxx
# - yyy
# - zzz

## Enable/disable automatic linting after changes (default: True)
#auto-lint: true
@@ -274,25 +324,40 @@ cog.outl("```")
## Run tests and fix problems found
#test: false

############
# Analytics:

## Enable/disable analytics for one session (default: False)
#analytics: false

## Specify a file to log analytics events
#analytics-log: xxx

## Permanently disable analytics
#analytics-disable: false

#################
# Other Settings:

## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:
#file: [xxx,yyyy,zzz]
#file:
# - xxx
# - yyy
# - zzz

## specify a read-only file (can be used multiple times)
#read: xxx
## Specify multiple values like this:
#read: [xxx,yyyy,zzz]
#read:
# - xxx
# - yyy
# - zzz

## Use VI editing mode in the terminal (default: False)
#vim: false

## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en

## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx

@@ -314,8 +379,11 @@ cog.outl("```")
## Apply the changes from the given file instead of running the chat (debug)
#apply: xxx

## Apply clipboard contents as edits using the main model's editor format
#apply-clipboard-edits: false

## Always say yes to every confirmation
#yes: false
#yes-always: false

## Enable verbose output
#verbose: false
@@ -335,16 +403,31 @@ cog.outl("```")
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#message-file: xxx

## Load and execute /commands from a file on launch
#load: xxx

## Specify the encoding for input and output (default: utf-8)
#encoding: utf-8

## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
#config: xxx

## Run aider in your browser
## Run aider in your browser (default: False)
#gui: false

## Enable/disable suggesting shell commands (default: True)
#suggest-shell-commands: true

## Enable/disable fancy input with history and completion (default: True)
#fancy-input: true

#################
# Voice Settings:

## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#voice-format: wav

## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en
```
<!--[[[end]]]-->

@@ -28,7 +28,7 @@ If the files above exist, they will be loaded in that order. Files loaded last w

Below is a sample `.env` file, which you
can also
[download from GitHub](https://github.com/paul-gauthier/aider/blob/main/aider/website/assets/sample.env).
[download from GitHub](https://github.com/Aider-AI/aider/blob/main/aider/website/assets/sample.env).

<!--[[[cog
from aider.args import get_sample_dotenv
@@ -75,9 +75,12 @@ cog.outl("```")
## Use claude-3-opus-20240229 model for the main chat
#AIDER_OPUS=

## Use claude-3-5-sonnet-20240620 model for the main chat
## Use claude-3-5-sonnet-20241022 model for the main chat
#AIDER_SONNET=

## Use claude-3-5-haiku-20241022 model for the main chat
#AIDER_HAIKU=

## Use gpt-4-0613 model for the main chat
#AIDER_4=

@@ -96,6 +99,12 @@ cog.outl("```")
## Use deepseek/deepseek-coder model for the main chat
#AIDER_DEEPSEEK=

## Use o1-mini model for the main chat
#AIDER_O1_MINI=

## Use o1-preview model for the main chat
#AIDER_O1_PREVIEW=

#################
# Model Settings:

@@ -129,17 +138,29 @@ cog.outl("```")
## Specify what edit format the LLM should use (default depends on model)
#AIDER_EDIT_FORMAT=

## Use architect edit format for the main chat
#AIDER_ARCHITECT=

## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#AIDER_WEAK_MODEL=

## Specify the model to use for editor tasks (default depends on --model)
#AIDER_EDITOR_MODEL=

## Specify the edit format for the editor model (default: depends on editor model)
#AIDER_EDITOR_EDIT_FORMAT=

## Only work with models that have meta-data available (default: True)
#AIDER_SHOW_MODEL_WARNINGS=true

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=

## Control how often the repo map is refreshed (default: auto)
#AIDER_MAP_REFRESH=auto
## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env

#################
# Cache Settings:

## Enable caching of prompts (default: False)
#AIDER_CACHE_PROMPTS=false
@@ -147,15 +168,18 @@ cog.outl("```")
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#AIDER_CACHE_KEEPALIVE_PINGS=false

###################
# Repomap Settings:

## Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
#AIDER_MAP_TOKENS=

## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#AIDER_MAP_REFRESH=auto

## Multiplier for map tokens when no files are specified (default: 2)
#AIDER_MAP_MULTIPLIER_NO_FILES=true

## Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
#AIDER_MAX_CHAT_HISTORY_TOKENS=

## Specify the .env file to load (default: .env in git root)
#AIDER_ENV_FILE=.env

################
# History Files:

@@ -201,6 +225,18 @@ cog.outl("```")
## Set the color for assistant output (default: #0088ff)
#AIDER_ASSISTANT_OUTPUT_COLOR=#0088ff

## Set the color for the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_COLOR=

## Set the background color for the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_BG_COLOR=

## Set the color for the current item in the completion menu (default: terminal's default background color)
#AIDER_COMPLETION_MENU_CURRENT_COLOR=

## Set the background color for the current item in the completion menu (default: terminal's default text color)
#AIDER_COMPLETION_MENU_CURRENT_BG_COLOR=

## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
#AIDER_CODE_THEME=default

@@ -249,6 +285,9 @@ cog.outl("```")
## Perform a dry run without modifying files (default: False)
#AIDER_DRY_RUN=false

## Skip the sanity check for the git repository (default: False)
#AIDER_SKIP_SANITY_CHECK_REPO=false

########################
# Fixing and committing:

@@ -270,6 +309,18 @@ cog.outl("```")
## Run tests and fix problems found
#AIDER_TEST=false

############
# Analytics:

## Enable/disable analytics for one session (default: False)
#AIDER_ANALYTICS=false

## Specify a file to log analytics events
#AIDER_ANALYTICS_LOG=

## Permanently disable analytics
#AIDER_ANALYTICS_DISABLE=false

#################
# Other Settings:

@@ -282,9 +333,6 @@ cog.outl("```")
## Use VI editing mode in the terminal (default: False)
#AIDER_VIM=false

## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en

## Specify the language to use in the chat (default: None, uses system settings)
#AIDER_CHAT_LANGUAGE=

@@ -303,8 +351,11 @@ cog.outl("```")
## Apply the changes from the given file instead of running the chat (debug)
#AIDER_APPLY=

## Apply clipboard contents as edits using the main model's editor format
#AIDER_APPLY_CLIPBOARD_EDITS=false

## Always say yes to every confirmation
#AIDER_YES=
#AIDER_YES_ALWAYS=

## Enable verbose output
#AIDER_VERBOSE=false
@@ -324,14 +375,29 @@ cog.outl("```")
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#AIDER_MESSAGE_FILE=

## Load and execute /commands from a file on launch
#AIDER_LOAD=

## Specify the encoding for input and output (default: utf-8)
#AIDER_ENCODING=utf-8

## Run aider in your browser
## Run aider in your browser (default: False)
#AIDER_GUI=false

## Enable/disable suggesting shell commands (default: True)
#AIDER_SUGGEST_SHELL_COMMANDS=true

## Enable/disable fancy input with history and completion (default: True)
#AIDER_FANCY_INPUT=true

#################
# Voice Settings:

## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#AIDER_VOICE_FORMAT=wav

## Specify the language for voice using ISO 639-1 code (default: auto)
#AIDER_VOICE_LANGUAGE=en
```
<!--[[[end]]]-->


@@ -26,26 +26,30 @@ cog.out(get_md_help())
]]]-->
```
usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
             [--opus] [--sonnet] [--4] [--4o] [--mini] [--4-turbo]
             [--35turbo] [--deepseek] [--list-models]
             [--openai-api-base] [--openai-api-type]
             [--openai-api-version] [--openai-api-deployment-id]
             [--openai-organization-id] [--model-settings-file]
             [--model-metadata-file]
             [--opus] [--sonnet] [--haiku] [--4] [--4o] [--mini]
             [--4-turbo] [--35turbo] [--deepseek] [--o1-mini]
             [--o1-preview] [--list-models] [--openai-api-base]
             [--openai-api-type] [--openai-api-version]
             [--openai-api-deployment-id] [--openai-organization-id]
             [--model-settings-file] [--model-metadata-file]
             [--verify-ssl | --no-verify-ssl] [--edit-format]
             [--weak-model]
             [--architect] [--weak-model] [--editor-model]
             [--editor-edit-format]
             [--show-model-warnings | --no-show-model-warnings]
             [--map-tokens] [--map-refresh]
             [--cache-prompts | --no-cache-prompts]
             [--cache-keepalive-pings] [--map-multiplier-no-files]
             [--max-chat-history-tokens] [--env-file]
             [--cache-prompts | --no-cache-prompts]
             [--cache-keepalive-pings] [--map-tokens]
             [--map-refresh] [--map-multiplier-no-files]
             [--input-history-file] [--chat-history-file]
             [--restore-chat-history | --no-restore-chat-history]
             [--llm-history-file] [--dark-mode] [--light-mode]
             [--pretty | --no-pretty] [--stream | --no-stream]
             [--user-input-color] [--tool-output-color]
             [--tool-error-color] [--tool-warning-color]
             [--assistant-output-color] [--code-theme]
             [--assistant-output-color] [--completion-menu-color]
             [--completion-menu-bg-color]
             [--completion-menu-current-color]
             [--completion-menu-current-bg-color] [--code-theme]
             [--show-diffs] [--git | --no-git]
             [--gitignore | --no-gitignore] [--aiderignore]
             [--subtree-only] [--auto-commits | --no-auto-commits]
@@ -55,15 +59,21 @@ usage: aider [-h] [--openai-api-key] [--anthropic-api-key] [--model]
             [--attribute-commit-message-author | --no-attribute-commit-message-author]
             [--attribute-commit-message-committer | --no-attribute-commit-message-committer]
             [--commit] [--commit-prompt] [--dry-run | --no-dry-run]
             [--lint] [--lint-cmd] [--auto-lint | --no-auto-lint]
             [--test-cmd] [--auto-test | --no-auto-test] [--test]
             [--file] [--read] [--vim] [--voice-language]
             [--skip-sanity-check-repo] [--lint] [--lint-cmd]
             [--auto-lint | --no-auto-lint] [--test-cmd]
             [--auto-test | --no-auto-test] [--test]
             [--analytics | --no-analytics] [--analytics-log]
             [--analytics-disable] [--file] [--read] [--vim]
             [--chat-language] [--version] [--just-check-update]
             [--check-update | --no-check-update]
             [--install-main-branch] [--upgrade] [--apply] [--yes]
             [-v] [--show-repo-map] [--show-prompts] [--exit]
             [--message] [--message-file] [--encoding] [-c] [--gui]
             [--install-main-branch] [--upgrade] [--apply]
             [--apply-clipboard-edits] [--yes-always] [-v]
             [--show-repo-map] [--show-prompts] [--exit] [--message]
             [--message-file] [--load] [--encoding] [-c]
             [--gui | --no-gui | --browser | --no-browser]
             [--suggest-shell-commands | --no-suggest-shell-commands]
             [--fancy-input | --no-fancy-input] [--voice-format]
             [--voice-language]

```
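Many of the flags in the usage synopsis above come in `--x | --no-x` pairs. A hypothetical sketch of how such paired on/off flags can be declared with Python's argparse (not aider's actual parser, which defines many more options):

```python
# Sketch of paired --flag / --no-flag options like those in the synopsis.
# Hypothetical example; requires Python 3.9+ for BooleanOptionalAction.
import argparse

parser = argparse.ArgumentParser(prog="aider")
parser.add_argument("--gui", default=False,
                    action=argparse.BooleanOptionalAction,
                    help="Run aider in your browser (default: False)")
parser.add_argument("--fancy-input", default=True,
                    action=argparse.BooleanOptionalAction,
                    help="Enable/disable fancy input (default: True)")

# Each option accepts both the positive and the negated spelling.
args = parser.parse_args(["--gui", "--no-fancy-input"])
print(args.gui, args.fancy_input)  # True False
```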
|
||||
@@ -94,9 +104,13 @@ Use claude-3-opus-20240229 model for the main chat
|
||||
Environment variable: `AIDER_OPUS`
|
||||
|
||||
### `--sonnet`
|
||||
Use claude-3-5-sonnet-20240620 model for the main chat
|
||||
Use claude-3-5-sonnet-20241022 model for the main chat
|
||||
Environment variable: `AIDER_SONNET`
|
||||
|
||||
### `--haiku`
|
||||
Use claude-3-5-haiku-20241022 model for the main chat
|
||||
Environment variable: `AIDER_HAIKU`
|
||||
|
||||
### `--4`
|
||||
Use gpt-4-0613 model for the main chat
|
||||
Environment variable: `AIDER_4`
|
||||
@@ -129,6 +143,14 @@ Aliases:
|
||||
Use deepseek/deepseek-coder model for the main chat
|
||||
Environment variable: `AIDER_DEEPSEEK`
|
||||
|
||||
### `--o1-mini`
|
||||
Use o1-mini model for the main chat
|
||||
Environment variable: `AIDER_O1_MINI`
|
||||
|
||||
### `--o1-preview`
|
||||
Use o1-preview model for the main chat
|
||||
Environment variable: `AIDER_O1_PREVIEW`
|
||||
|
||||
## Model Settings:
|
||||
|
||||
### `--list-models MODEL`
|
||||
@@ -183,10 +205,22 @@ Aliases:
|
||||
- `--edit-format EDIT_FORMAT`
|
||||
- `--chat-mode EDIT_FORMAT`
|
||||
|
||||
### `--architect`
|
||||
Use architect edit format for the main chat
|
||||
Environment variable: `AIDER_ARCHITECT`
|
||||
|
||||
### `--weak-model WEAK_MODEL`
|
||||
Specify the model to use for commit messages and chat history summarization (default depends on --model)
|
||||
Environment variable: `AIDER_WEAK_MODEL`
|
||||
|
||||
### `--editor-model EDITOR_MODEL`
|
||||
Specify the model to use for editor tasks (default depends on --model)
|
||||
Environment variable: `AIDER_EDITOR_MODEL`
|
||||
|
||||
### `--editor-edit-format EDITOR_EDIT_FORMAT`
|
||||
Specify the edit format for the editor model (default: depends on editor model)
|
||||
Environment variable: `AIDER_EDITOR_EDIT_FORMAT`
|
||||
|
||||
### `--show-model-warnings`
|
||||
Only work with models that have meta-data available (default: True)
|
||||
Default: True
|
||||
@@ -195,14 +229,16 @@ Aliases:
|
||||
- `--show-model-warnings`
|
||||
- `--no-show-model-warnings`
|
||||
|
||||
### `--map-tokens VALUE`
|
||||
Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
|
||||
Environment variable: `AIDER_MAP_TOKENS`
|
||||
### `--max-chat-history-tokens VALUE`
|
||||
Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
|
||||
Environment variable: `AIDER_MAX_CHAT_HISTORY_TOKENS`
|
||||
|
||||
### `--map-refresh VALUE`
|
||||
Control how often the repo map is refreshed (default: auto)
|
||||
Default: auto
|
||||
Environment variable: `AIDER_MAP_REFRESH`
|
||||
### `--env-file ENV_FILE`
|
||||
Specify the .env file to load (default: .env in git root)
|
||||
Default: .env
|
||||
Environment variable: `AIDER_ENV_FILE`
|
||||
|
||||
## Cache Settings:
|
||||
|
||||
### `--cache-prompts`
|
||||
Enable caching of prompts (default: False)
|
||||
@@ -217,20 +253,22 @@ Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
|
||||
Default: 0
|
||||
Environment variable: `AIDER_CACHE_KEEPALIVE_PINGS`
|
||||
|
||||
## Repomap Settings:
|
||||
|
||||
### `--map-tokens VALUE`
|
||||
Suggested number of tokens to use for repo map, use 0 to disable (default: 1024)
|
||||
Environment variable: `AIDER_MAP_TOKENS`
|
||||
|
||||
### `--map-refresh VALUE`
|
||||
Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
|
||||
Default: auto
|
||||
Environment variable: `AIDER_MAP_REFRESH`
|
||||
|
||||
### `--map-multiplier-no-files VALUE`
|
||||
Multiplier for map tokens when no files are specified (default: 2)
|
||||
Default: 2
|
||||
Environment variable: `AIDER_MAP_MULTIPLIER_NO_FILES`
|
||||
|
||||
### `--max-chat-history-tokens VALUE`
|
||||
Maximum number of tokens to use for chat history. If not specified, uses the model's max_chat_history_tokens.
|
||||
Environment variable: `AIDER_MAX_CHAT_HISTORY_TOKENS`
|
||||
|
||||
### `--env-file ENV_FILE`
|
||||
Specify the .env file to load (default: .env in git root)
|
||||
Default: .env
|
||||
Environment variable: `AIDER_ENV_FILE`
|
||||
|
||||
## History Files:
|
||||
|
||||
### `--input-history-file INPUT_HISTORY_FILE`
|
||||
@@ -307,6 +345,22 @@ Set the color for assistant output (default: #0088ff)
|
||||
Default: #0088ff
|
||||
Environment variable: `AIDER_ASSISTANT_OUTPUT_COLOR`
|
||||
|
||||
### `--completion-menu-color COLOR`
|
||||
Set the color for the completion menu (default: terminal's default text color)
|
||||
Environment variable: `AIDER_COMPLETION_MENU_COLOR`
|
||||
|
||||
### `--completion-menu-bg-color COLOR`
|
||||
Set the background color for the completion menu (default: terminal's default background color)
|
||||
Environment variable: `AIDER_COMPLETION_MENU_BG_COLOR`
|
||||
|
||||
### `--completion-menu-current-color COLOR`
|
||||
Set the color for the current item in the completion menu (default: terminal's default background color)
|
||||
Environment variable: `AIDER_COMPLETION_MENU_CURRENT_COLOR`
|
||||
|
||||
### `--completion-menu-current-bg-color COLOR`
|
||||
Set the background color for the current item in the completion menu (default: terminal's default text color)
|
||||
Environment variable: `AIDER_COMPLETION_MENU_CURRENT_BG_COLOR`
|
||||
|
||||
### `--code-theme VALUE`
|
||||
Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light)
|
||||
Default: default
|
||||
@@ -410,6 +464,11 @@ Aliases:

- `--dry-run`
- `--no-dry-run`

### `--skip-sanity-check-repo`
Skip the sanity check for the git repository (default: False)

Default: False

Environment variable: `AIDER_SKIP_SANITY_CHECK_REPO`

## Fixing and committing:

### `--lint`

@@ -448,6 +507,25 @@ Run tests and fix problems found

Default: False

Environment variable: `AIDER_TEST`

## Analytics:

### `--analytics`
Enable/disable analytics for one session (default: False)

Default: False

Environment variable: `AIDER_ANALYTICS`

Aliases:
- `--analytics`
- `--no-analytics`

### `--analytics-log ANALYTICS_LOG_FILE`
Specify a file to log analytics events

Environment variable: `AIDER_ANALYTICS_LOG`

### `--analytics-disable`
Permanently disable analytics

Default: False

Environment variable: `AIDER_ANALYTICS_DISABLE`
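Each option above lists an `AIDER_*` environment variable, so any flag can be set from the shell instead of the command line. A minimal sketch (the log-file path is just an illustration, not a default):

```shell
# Configure aider options via environment variables instead of flags.
# Variable names mirror the listings above (AIDER_ + upper-snake flag name).
export AIDER_ANALYTICS=false                      # same as --no-analytics
export AIDER_ANALYTICS_LOG="$HOME/aider-events.log"  # same as --analytics-log FILE
echo "$AIDER_ANALYTICS"
```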
## Other Settings:

### `--file FILE`

@@ -463,11 +541,6 @@ Use VI editing mode in the terminal (default: False)

Default: False

Environment variable: `AIDER_VIM`

### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)

Default: en

Environment variable: `AIDER_VOICE_LANGUAGE`

### `--chat-language CHAT_LANGUAGE`
Specify the language to use in the chat (default: None, uses system settings)

Environment variable: `AIDER_CHAT_LANGUAGE`
@@ -505,9 +578,14 @@ Aliases:

Apply the changes from the given file instead of running the chat (debug)

Environment variable: `AIDER_APPLY`

### `--yes`
### `--apply-clipboard-edits`
Apply clipboard contents as edits using the main model's editor format

Default: False

Environment variable: `AIDER_APPLY_CLIPBOARD_EDITS`

### `--yes-always`
Always say yes to every confirmation

Environment variable: `AIDER_YES`
Environment variable: `AIDER_YES_ALWAYS`

### `--verbose`
Enable verbose output

@@ -547,6 +625,10 @@ Aliases:

- `--message-file MESSAGE_FILE`
- `-f MESSAGE_FILE`

### `--load LOAD_FILE`
Load and execute /commands from a file on launch

Environment variable: `AIDER_LOAD`

### `--encoding VALUE`
Specify the encoding for input and output (default: utf-8)

Default: utf-8
@@ -559,12 +641,14 @@ Aliases:

- `--config CONFIG_FILE`

### `--gui`
Run aider in your browser
Run aider in your browser (default: False)

Default: False

Environment variable: `AIDER_GUI`

Aliases:
- `--gui`
- `--no-gui`
- `--browser`
- `--no-browser`

### `--suggest-shell-commands`
Enable/disable suggesting shell commands (default: True)

@@ -573,4 +657,24 @@ Environment variable: `AIDER_SUGGEST_SHELL_COMMANDS`

Aliases:
- `--suggest-shell-commands`
- `--no-suggest-shell-commands`

### `--fancy-input`
Enable/disable fancy input with history and completion (default: True)

Default: True

Environment variable: `AIDER_FANCY_INPUT`

Aliases:
- `--fancy-input`
- `--no-fancy-input`

## Voice Settings:

### `--voice-format VOICE_FORMAT`
Audio format for voice recording (default: wav). webm and mp3 require ffmpeg

Default: wav

Environment variable: `AIDER_VOICE_FORMAT`

### `--voice-language VOICE_LANGUAGE`
Specify the language for voice using ISO 639-1 code (default: auto)

Default: en

Environment variable: `AIDER_VOICE_LANGUAGE`

<!--[[[end]]]-->
@@ -112,9 +112,9 @@ like functions and methods also include their signatures.

Here's a
sample of the map of the aider repo, just showing the maps of
[main.py](https://github.com/paul-gauthier/aider/blob/main/aider/main.py)
[main.py](https://github.com/Aider-AI/aider/blob/main/aider/main.py)
and
[io.py](https://github.com/paul-gauthier/aider/blob/main/aider/io.py)
[io.py](https://github.com/Aider-AI/aider/blob/main/aider/io.py)
:

```
@@ -30,7 +30,7 @@ current chat to build a compact

Adding a bunch of files that are mostly irrelevant to the
task at hand will often distract or confuse the LLM.
The LLM will give worse coding results, and sometimes even fail to correctly edit files.
Adding extra files will also increase the token costs on your OpenAI invoice.
Adding extra files will also increase your token costs.

Again, it's usually best to just add the files to the chat that will need to be modified.
If you still wish to add lots of files to the chat, you can:
@@ -92,6 +92,40 @@ the functionality you want to use in repo B.

Then when you're using aider in repo B, you can
`/read` in that script.

## How do I turn on the repository map?

Depending on the LLM you are using, aider may launch with the repo map disabled by default:

```
Repo-map: disabled
```

This is because weaker models get easily overwhelmed and confused by the content of the
repo map. They sometimes mistakenly try to edit the code in the repo map.
The repo map is usually disabled for a good reason.

If you would like to force it on, you can run aider with `--map-tokens 1024`.
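For example, a sketch of the two equivalent ways to force the map on (the `AIDER_MAP_TOKENS` variable name is assumed from the `AIDER_*` naming convention used throughout these docs):

```shell
# Force the repo map on with a ~1k token budget.
# Command-line form (from the text above):  aider --map-tokens 1024
# Equivalent environment-variable form (assumed from the AIDER_* convention):
export AIDER_MAP_TOKENS=1024
echo "$AIDER_MAP_TOKENS"
```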
## How do I include the git history in the context?

When starting a fresh aider session, you can include recent git history in the chat context. This can be useful for providing the LLM with information about recent changes. To do this:

1. Use the `/run` command with `git diff` to show recent changes:
   ```
   /run git diff HEAD~1
   ```
   This will include the diff of the last commit in the chat history.

2. To include diffs from multiple commits, increase the number after the tilde:
   ```
   /run git diff HEAD~3
   ```
   This will show changes from the last three commits.

Remember, the chat history already includes recent changes made during the current session, so this tip is most useful when starting a new aider session and you want to provide context about recent work.

{: .tip }
The `/git` command will not work for this purpose, as its output is not included in the chat.
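You can preview what those `/run` commands will send to the LLM by running the same `git diff` outside of aider. A self-contained sketch in a disposable repo (the file and commit names are just illustrations):

```shell
# Demo of the `git diff HEAD~1` form above, in a throwaway repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
echo "hello" > greeting.txt
git add greeting.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm "second"
git diff HEAD~1 --stat   # summarizes what changed in the last commit
```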
## How can I run aider locally from source code?

@@ -99,7 +133,7 @@ To run the project locally, follow these steps:

```
# Clone the repository
git clone git@github.com:paul-gauthier/aider.git
git clone git@github.com:Aider-AI/aider.git

# Navigate to the project directory
cd aider

@@ -116,7 +150,6 @@ python -m aider

## Can I change the system prompts that aider uses?

Aider is set up to support different system prompts and edit formats

@@ -157,18 +190,55 @@ You can also refer to the

[instructions for installing a development version of aider](https://aider.chat/docs/install/optional.html#install-the-development-version-of-aider).

## How are the "aider wrote xx% of code" stats computed?

[Aider is tightly integrated with git](/docs/git.html) so all
of aider's code changes are committed to the repo with proper attribution.
The
[stats are computed](https://github.com/Aider-AI/aider/blob/main/scripts/blame.py)
by doing something like `git blame` on the repo,
and counting up who wrote all the new lines of code in each release.
Only lines in source code files are counted, not documentation or prompt files.
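The core idea can be sketched with plain git commands. This is a simplified stand-in for the real `blame.py` script, shown in a disposable repo with made-up authors:

```shell
# Count surviving lines per author with git blame (simplified sketch).
tmp=$(mktemp -d) && cd "$tmp" && git init -q
printf 'line one\n' > app.py
git add app.py
git -c user.name=alice -c user.email=a@example.com commit -qm "alice adds a line"
printf 'line two\n' >> app.py
git -c user.name=aider -c user.email=ai@example.com commit -qam "aider adds a line"
# One `author` record per source line; tally them per author:
git blame --line-porcelain app.py | sed -n 's/^author //p' | sort | uniq -c
```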
## Can I share my aider chat transcript?

Yes, you can now share aider chat logs in a pretty way.

1. Copy the markdown logs you want to share from `.aider.chat.history.md` and make a github gist. Or publish the raw markdown logs on the web any way you'd like.

   https://gist.github.com/paul-gauthier/2087ab8b64034a078c0a209440ac8be0
   ```
   https://gist.github.com/Aider-AI/2087ab8b64034a078c0a209440ac8be0
   ```

2. Take the gist URL and append it to:

   https://aider.chat/share/?mdurl=
   ```
   https://aider.chat/share/?mdurl=
   ```

This will give you a URL like this, which shows the chat history like you'd see in a terminal:

https://aider.chat/share/?mdurl=https://gist.github.com/paul-gauthier/2087ab8b64034a078c0a209440ac8be0
```
https://aider.chat/share/?mdurl=https://gist.github.com/Aider-AI/2087ab8b64034a078c0a209440ac8be0
```
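The two steps amount to simple string concatenation, for example:

```shell
# Build the shareable URL from a gist URL (step 2 above).
gist_url="https://gist.github.com/paul-gauthier/2087ab8b64034a078c0a209440ac8be0"
share_url="https://aider.chat/share/?mdurl=${gist_url}"
echo "$share_url"
```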
## Can I edit files myself while aider is running?

Yes. Aider always reads the latest copy of files from the file
system when you send each message.

While you're waiting for aider's reply to complete, it's probably unwise to
edit files that you've added to the chat.
Your edits and aider's edits might conflict.

## What is Aider AI LLC?

Aider AI LLC is the company behind the aider AI coding tool.
Aider is
[open source and available on GitHub](https://github.com/Aider-AI/aider)
under an
[Apache 2.0 license](https://github.com/Aider-AI/aider/blob/main/LICENSE.txt).

<div style="height:80vh"></div>
@@ -1,6 +1,6 @@

---
parent: More info
nav_order: 800
nav_order: 100
description: Aider is tightly integrated with git.
---

@@ -22,9 +22,16 @@ This keeps your edits separate from aider's edits, and makes sure you never lose

## In-chat commands

Aider also allows you to use in-chat commands to `/diff` or `/undo` the last change.
To do more complex management of your git history, you can use raw `git` commands,
either by using `/git` within the chat, or with standard git tools outside of aider.
Aider also allows you to use
[in-chat commands](/docs/usage/commands.html)
to perform git operations:

- `/diff` will show all the file changes since the last message you sent.
- `/undo` will undo and discard the last change.
- `/commit` to commit all dirty changes with a sensible commit message.
- `/git` will let you run raw git commands to do more complex management of your git history.

You can also manage your git history outside of aider with your preferred git tools.
## Disabling git integration

@@ -36,15 +43,18 @@ While it is not recommended, you can disable aider's use of git in a few ways:

## Commit messages

Aider sends the `--weak-model` a copy of the diffs and the chat history
and asks it to produce a commit message.
By default, aider creates commit messages which follow
[Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/).

You can customize the
[commit prompt](https://github.com/paul-gauthier/aider/blob/main/aider/prompts.py#L5)
[commit prompt](https://github.com/Aider-AI/aider/blob/main/aider/prompts.py#L5)
with the `--commit-prompt` option.
You can place that on the command line, or
[configure it via a config file or environment variables](https://aider.chat/docs/config.html).
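For instance, a config-file sketch (the prompt text is only an illustration, not aider's default; keys in `.aider.conf.yml` mirror the CLI flag names):

```yaml
# Hypothetical .aider.conf.yml fragment overriding the commit prompt.
commit-prompt: >
  Write a concise Conventional Commits message
  for the following diffs and chat history.
```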
## Commit attribution

Aider marks commits that it either authored or committed.

@@ -9,6 +9,10 @@ nav_order: 10

- TOC
{:toc}

## Python version

Aider currently works with python 3.9-3.12.

## Install git

Make sure you have git installed.

@@ -31,7 +35,7 @@ To work with Anthropic's models like Claude 3.5 Sonnet you need a paid

```
# Install aider
python -m pip install aider-chat
python -m pip install -U --upgrade-strategy only-if-needed aider-chat

# To work with GPT-4o:
$ aider --4o --openai-api-key sk-xxx...
@@ -44,7 +48,7 @@ $ aider --sonnet --anthropic-api-key sk-xxx...

```
# Install aider
python -m pip install aider-chat
python -m pip install -U --upgrade-strategy only-if-needed aider-chat

# To work with GPT-4o:
$ aider --4o --openai-api-key sk-xxx...

@@ -74,15 +74,11 @@ joshuavial also confirmed that aider works inside a VS Code terminal window.

Aider detects if it is running inside VSCode and turns off pretty/color output,
since the VSCode terminal doesn't seem to support it well.

[MattFlower](https://github.com/MattFlower) provided a VSCode plugin for aider:

[https://marketplace.visualstudio.com/items?itemName=MattFlower.aider](https://marketplace.visualstudio.com/items?itemName=MattFlower.aider)

### Other editors

If you are interested in creating an aider plugin for your favorite editor,
please let me know by opening a
[GitHub issue](https://github.com/paul-gauthier/aider/issues).
[GitHub issue](https://github.com/Aider-AI/aider/issues).

## Install the development version of aider

@@ -91,7 +87,7 @@ If you want the very latest development version of aider

you can install directly from GitHub:

```
python -m pip install --upgrade git+https://github.com/paul-gauthier/aider.git
python -m pip install --upgrade git+https://github.com/Aider-AI/aider.git
```

If you've git cloned the aider repository already, you can install "live" from your local copy. This is mostly useful if you are developing aider and want your current modifications to take effect immediately.
@@ -1,6 +1,6 @@

---
parent: More info
nav_order: 900
nav_order: 200
description: Aider supports pretty much all popular coding languages.
---

# Supported languages

@@ -33,7 +33,7 @@ then it should be possible to add repo map support.

To build a repo map, aider needs the `tags.scm` file
from the given language's tree-sitter grammar.
If you can find and share that file in a
[GitHub issue](https://github.com/paul-gauthier/aider/issues),
[GitHub issue](https://github.com/Aider-AI/aider/issues),
then it may be possible to add repo map support.

If aider doesn't support linting, it will be complicated to
@@ -55,14 +55,85 @@ The model also has to successfully apply all its changes to the source file with

</tbody>
</table>

<canvas id="editChart" width="800" height="450" style="margin-top: 20px"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function () {
  var ctx = document.getElementById('editChart').getContext('2d');
  const HIGHTLIGHT_MODEL = 'no no no no';
  var leaderboardData = {
    labels: [],
    datasets: [{
      label: 'Percent completed correctly',
      data: [],
      backgroundColor: function(context) {
        const label = context.chart.data.labels[context.dataIndex] || '';
        return (label && label.includes(HIGHTLIGHT_MODEL)) ? 'rgba(255, 99, 132, 0.2)' : 'rgba(54, 162, 235, 0.2)';
      },
      borderColor: function(context) {
        const label = context.chart.data.labels[context.dataIndex] || '';
        return (label && label.includes(HIGHTLIGHT_MODEL)) ? 'rgba(255, 99, 132, 1)' : 'rgba(54, 162, 235, 1)';
      },
      borderWidth: 1
    }]
  };

  {% include leaderboard_graph.html
    chart_id="editChart"
    data=edit_sorted
    row_prefix="edit-row"
    pass_rate_key="pass_rate_2"
  %}

  var allData = [];
  {% for row in edit_sorted %}
    allData.push({
      model: '{{ row.model }}',
      pass_rate_2: {{ row.pass_rate_2 }},
      percent_cases_well_formed: {{ row.percent_cases_well_formed }}
    });
  {% endfor %}

  function updateChart() {
    var selectedRows = document.querySelectorAll('tr.selected');
    var showAll = selectedRows.length === 0;

    leaderboardData.labels = [];
    leaderboardData.datasets[0].data = [];

    allData.forEach(function(row, index) {
      var rowElement = document.getElementById('edit-row-' + index);
      if (showAll) {
        rowElement.classList.remove('selected');
      }
      if (showAll || rowElement.classList.contains('selected')) {
        leaderboardData.labels.push(row.model);
        leaderboardData.datasets[0].data.push(row.pass_rate_2);
      }
    });

    leaderboardChart.update();
  }

  var tableBody = document.querySelector('table tbody');
  allData.forEach(function(row, index) {
    var tr = tableBody.children[index];
    tr.id = 'edit-row-' + index;
    tr.style.cursor = 'pointer';
    tr.onclick = function() {
      this.classList.toggle('selected');
      updateChart();
    };
  });

  var leaderboardChart = new Chart(ctx, {
    type: 'bar',
    data: leaderboardData,
    options: {
      scales: {
        y: {
          beginAtZero: true
        }
      }
    }
  });

  updateChart();
});
</script>
<style>
tr.selected {
  color: #0056b3;
@@ -81,7 +152,7 @@ The model also has to successfully apply all its changes to the source file with

## Code refactoring leaderboard

[Aider's refactoring benchmark](https://github.com/paul-gauthier/refactor-benchmark) asks the LLM to refactor 89 large methods from large python classes. This is a more challenging benchmark, which tests the model's ability to output long chunks of code without skipping sections or making mistakes. It was developed to provoke and measure [GPT-4 Turbo's "lazy coding" habit](/2023/12/21/unified-diffs.html).
[Aider's refactoring benchmark](https://github.com/Aider-AI/refactor-benchmark) asks the LLM to refactor 89 large methods from large python classes. This is a more challenging benchmark, which tests the model's ability to output long chunks of code without skipping sections or making mistakes. It was developed to provoke and measure [GPT-4 Turbo's "lazy coding" habit](/2023/12/21/unified-diffs.html).

The refactoring benchmark requires a large context window to
work with large source files.

@@ -111,12 +182,78 @@ Therefore, results are available for fewer models.

</tbody>
</table>
{% include leaderboard_graph.html
  chart_id="refacChart"
  data=refac_sorted
  row_prefix="refac-row"
  pass_rate_key="pass_rate_1"
%}
<canvas id="refacChart" width="800" height="450" style="margin-top: 20px"></canvas>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function () {
  var ctx = document.getElementById('refacChart').getContext('2d');
  var leaderboardData = {
    labels: [],
    datasets: [{
      label: 'Percent completed correctly',
      data: [],
      backgroundColor: 'rgba(54, 162, 235, 0.2)',
      borderColor: 'rgba(54, 162, 235, 1)',
      borderWidth: 1
    }]
  };

  var allData = [];
  {% for row in refac_sorted %}
    allData.push({
      model: '{{ row.model }}',
      pass_rate_1: {{ row.pass_rate_1 }},
      percent_cases_well_formed: {{ row.percent_cases_well_formed }}
    });
  {% endfor %}

  function updateChart() {
    var selectedRows = document.querySelectorAll('tr.selected');
    var showAll = selectedRows.length === 0;

    leaderboardData.labels = [];
    leaderboardData.datasets[0].data = [];

    allData.forEach(function(row, index) {
      var rowElement = document.getElementById('refac-row-' + index);
      if (showAll) {
        rowElement.classList.remove('selected');
      }
      if (showAll || rowElement.classList.contains('selected')) {
        leaderboardData.labels.push(row.model);
        leaderboardData.datasets[0].data.push(row.pass_rate_1);
      }
    });

    leaderboardChart.update();
  }

  var tableBody = document.querySelectorAll('table tbody')[1];
  allData.forEach(function(row, index) {
    var tr = tableBody.children[index];
    tr.id = 'refac-row-' + index;
    tr.style.cursor = 'pointer';
    tr.onclick = function() {
      this.classList.toggle('selected');
      updateChart();
    };
  });

  var leaderboardChart = new Chart(ctx, {
    type: 'bar',
    data: leaderboardData,
    options: {
      scales: {
        y: {
          beginAtZero: true
        }
      }
    }
  });

  updateChart();
});
</script>
## LLM code editing skill by model release date

@@ -151,10 +288,10 @@ since it is the easiest format for an LLM to use.

Contributions of benchmark results are welcome!
See the
[benchmark README](https://github.com/paul-gauthier/aider/blob/main/benchmark/README.md)
[benchmark README](https://github.com/Aider-AI/aider/blob/main/benchmark/README.md)
for information on running aider's code editing benchmarks.
Submit results by opening a PR with edits to the
[benchmark results data files](https://github.com/paul-gauthier/aider/blob/main/aider/website/_data/).
[benchmark results data files](https://github.com/Aider-AI/aider/blob/main/aider/website/_data/).

<p class="post-date">
@@ -181,6 +318,6 @@ mod_dates = [get_last_modified_date(file) for file in files]

latest_mod_date = max(mod_dates)
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
]]]-->
September 21, 2024.
November 11, 2024.
<!--[[[end]]]-->
</p>
aider/website/docs/legal/contributor-agreement.md (new file, 111 lines)
@@ -0,0 +1,111 @@
Individual Contributor License Agreement

Thank you for your interest in Aider AI LLC ("Aider AI").
To clarify the intellectual property license
granted with Contributions from any person or entity, Aider AI
must have on file a signed Contributor License Agreement ("CLA")
from each Contributor, indicating agreement with the license
terms below. This agreement is for your protection as a Contributor
as well as the protection of Aider AI and its users. It does not
change your rights to use your own Contributions for any other purpose.

Please complete and sign this Agreement. Read this document carefully
before signing and keep a copy for your records.

You accept and agree to the following terms and conditions for Your
Contributions (present and future) that you submit to Aider AI.
Except for the license granted herein to Aider AI and recipients
of software distributed by Aider AI, You reserve all right, title,
and interest in and to Your Contributions.

1. Definitions.

   "You" (or "Your") shall mean the copyright owner or legal entity
   authorized by the copyright owner that is making this Agreement
   with Aider AI. For legal entities, the entity making a
   Contribution and all other entities that control, are controlled
   by, or are under common control with that entity are considered to
   be a single Contributor. For the purposes of this definition,
   "control" means (i) the power, direct or indirect, to cause the
   direction or management of such entity, whether by contract or
   otherwise, or (ii) ownership of fifty percent (50%) or more of the
   outstanding shares, or (iii) beneficial ownership of such entity.

   "Contribution" shall mean any original work of authorship,
   including any modifications or additions to an existing work, that
   is intentionally submitted by You to Aider AI for inclusion
   in, or documentation of, any of the products owned or managed by
   Aider AI (the "Work"). For the purposes of this definition,
   "submitted" means any form of electronic, verbal, or written
   communication sent to Aider AI or its representatives,
   including but not limited to communication on electronic mailing
   lists, source code control systems, and issue tracking systems that
   are managed by, or on behalf of, Aider AI for the purpose of
   discussing and improving the Work, but excluding communication that
   is conspicuously marked or otherwise designated in writing by You
   as "Not a Contribution."

2. Grant of Copyright License. Subject to the terms and conditions of
   this Agreement, You hereby grant to Aider AI and to
   recipients of software distributed by Aider AI a perpetual,
   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
   copyright license to reproduce, prepare derivative works of,
   publicly display, publicly perform, sublicense, and distribute Your
   Contributions and such derivative works.

3. Grant of Patent License. Subject to the terms and conditions of
   this Agreement, You hereby grant to Aider AI and to
   recipients of software distributed by Aider AI a perpetual,
   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
   (except as stated in this section) patent license to make, have
   made, use, offer to sell, sell, import, and otherwise transfer the
   Work, where such license applies only to those patent claims
   licensable by You that are necessarily infringed by Your
   Contribution(s) alone or by combination of Your Contribution(s)
   with the Work to which such Contribution(s) was submitted. If any
   entity institutes patent litigation against You or any other entity
   (including a cross-claim or counterclaim in a lawsuit) alleging
   that your Contribution, or the Work to which you have contributed,
   constitutes direct or contributory patent infringement, then any
   patent licenses granted to that entity under this Agreement for
   that Contribution or Work shall terminate as of the date such
   litigation is filed.

4. You represent that you are legally entitled to grant the above
   license. If your employer(s) has rights to intellectual property
   that you create that includes your Contributions, you represent
   that you have received permission to make Contributions on behalf
   of that employer, that your employer has waived such rights for
   your Contributions to Aider AI, or that your employer has
   executed a separate Corporate CLA with Aider AI.

5. You represent that each of Your Contributions is Your original
   creation (see section 7 for submissions on behalf of others). You
   represent that Your Contribution submissions include complete
   details of any third-party license or other restriction (including,
   but not limited to, related patents and trademarks) of which you
   are personally aware and which are associated with any part of Your
   Contributions.

6. You are not expected to provide support for Your Contributions,
   except to the extent You desire to provide support. You may provide
   support for free, for a fee, or not at all. Unless required by
   applicable law or agreed to in writing, You provide Your
   Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
   OF ANY KIND, either express or implied, including, without
   limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
   MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

7. Should You wish to submit work that is not Your original creation,
   You may submit it to Aider AI separately from any
   Contribution, identifying the complete details of its source and of
   any license or other restriction (including, but not limited to,
   related patents, trademarks, and license agreements) of which you
   are personally aware, and conspicuously marking the work as
   "Submitted on behalf of a third-party: [named here]".

8. You agree to notify Aider AI of any facts or circumstances of
   which you become aware that would make these representations
   inaccurate in any respect.
aider/website/docs/legal/privacy.md (new file, 104 lines)
@@ -0,0 +1,104 @@
|
||||
---
|
||||
parent: More info
|
||||
nav_order: 500
|
||||
---
|
||||
|
||||
# Privacy policy
|
||||
|
||||
[Aider AI LLC](/docs/faq.html#what-is-aider-ai-llc)
|
||||
(“Aider,” “we,” “our,” and/or “us”) values the privacy of individuals who use our website, programming tools, and related services (collectively, our “Services”). This privacy policy (the “Privacy Policy”) explains how we collect, use, and disclose information from users of our Services. By using our Services, you agree to the collection, use, disclosure, and procedures this Privacy Policy describes.
|
||||
|
||||
### Information We Collect
|
||||
|
||||
We may collect a variety of information from or about you or your devices from various sources, as described below.
|
||||
|
||||
### A. Information You Provide to Us.
|
||||
|
||||
**Communications.** If you contact us directly, we may receive additional information about you, such as your name, email address, the contents of a message or attachments that you may send to us, and other information you choose to provide.
|
||||
|
||||
### B. Information We Collect When You Use Our Services.
|
||||
|
||||
**Device Information.** We may receive information about the device and software you use to access our Services, including IP address, device type, device identifiers, web browser type and version, and operating system version.
|
||||
|
||||
**Usage Information.** We may automatically receive information about your interactions with our Services, like the pages or other content you view, referrer information (the website you visited before coming to our Services), and the dates and times of your visits.
|
||||
|
||||
**Analytics Information.** If you use our programming tools, we may receive information about your interactions with the tools, such as how often certain features or commands are used, information about exceptions and errors, and which large language models are used. This information is associated with a randomly generated identifier, not any directly identifiable user information such as your name or email address. Please see the “Your Choices” section below for information on how to disable the collection of this information.
|
||||
|
||||
**Information from Cookies and Other Tracking Technologies.** We and our third-party partners may collect information about your activities on our Services using cookies, pixel tags, SDKs, or other tracking technologies. Our third-party partners, such as analytics and security partners, may also use these technologies to collect information about your online activities over time and across different services.
|
||||
|
||||
|
||||
### How We Use the Information We Collect
|
||||
|
||||
We use the information we collect:
|
||||
|
||||
- To provide, maintain, improve, and enhance our Services;
- To understand and analyze how you use our Services and develop new products, services, features, and functionality;
- To communicate with you, provide you with updates and other information relating to our Services, provide information that you request, respond to comments and questions, and otherwise provide customer support;
- To generate anonymized or aggregate data containing only de-identified, non-personal information that we may use for any lawful purposes such as to publish reports;
- To find and prevent fraud and abuse, and respond to trust and safety issues that may arise;
- For compliance purposes, including enforcing our legal rights, or as may be required by applicable laws and regulations or requested by any judicial process or governmental agency; and
- For other purposes for which we provide specific notice at the time the information is collected.

### How We Disclose the Information We Collect

**Affiliates.** We may disclose any information we receive to our current or future affiliates for any of the purposes described in this Privacy Policy.

**Vendors and Service Providers.** We may disclose any information we receive to vendors and service providers retained in connection with the provision of our Services.

**Analytics Partners.** We may use analytics services to collect and process certain analytics data to improve our Services, such as by improving the ability of our programming tools to work with LLMs, edit code, and complete user requests.

**As Required By Law and Similar Disclosures.** We may access, preserve, and disclose your information if we believe doing so is required or appropriate to: (a) comply with law enforcement requests and legal process, such as a court order or subpoena; (b) respond to your requests; or (c) protect your, our, or others’ rights, property, or safety. For the avoidance of doubt, the disclosure of your information may occur if you post any objectionable content on or through the Services.

**Merger, Sale, or Other Asset Transfers.** We may transfer your information to service providers, advisors, potential transactional partners, or other third parties in connection with the consideration, negotiation, or completion of a corporate transaction in which we are acquired by or merged with another company or we sell, liquidate, or transfer all or a portion of our assets. The use of your information following any of these events will be governed by the provisions of this Privacy Policy in effect at the time the applicable information was collected.

**Consent.** We may also disclose your information with your permission.

### Your Choices

**Analytics Information.** You can turn off analytics collection when using our programming tools. Please visit this
[documentation page](/docs/more/analytics.html)
for more information about the data collected and your options.

### Third Parties

Our Services may contain links to other websites, products, or services that we do not own or operate. We are not responsible for the privacy practices of these third parties. Please be aware that this Privacy Policy does not apply to your activities on these third-party services or any information you disclose to these third parties. We encourage you to read their privacy policies before providing any information to them.

### Security

We make reasonable efforts to protect your information by using physical and electronic safeguards designed to improve the security of the information we maintain. However, because no electronic transmission or storage of information can be entirely secure, we can make no guarantees as to the security or privacy of your information.

### Children’s Privacy

We do not knowingly collect, maintain, or use personal information from children under 18 years of age, and no part of our Service(s) is directed to children. If you learn that a child has provided us with personal information in violation of this Privacy Policy, then you may alert us at [INSERT EMAIL ADDRESS].

### International Visitors

Our Services are hosted in the United States and intended for visitors located within the United States. If you choose to use the Services from the European Union or other regions of the world with laws governing data collection and use that may differ from U.S. law, then please note that you are transferring your personal information outside of those regions to the U.S. for storage and processing. We may also transfer your data from the U.S. to other countries or regions in connection with storage and processing of data, fulfilling your requests, and operating the Services. By providing any information, including personal information, on or to the Services, you consent to such transfer, storage, and processing.

### Changes to this Privacy Policy

We will post any adjustments to the Privacy Policy on this page, and the revised version will be effective when it is posted. If we materially change the ways in which we use or disclose personal information previously collected from you through the Services, we will notify you through the Services, by email, or other communication.

### Contact Information

If you have any questions, comments, or concerns about our processing activities, please email us at privacy@aider.chat.

----

<p class="post-date">
Last updated
<!--[[[cog
import subprocess
import datetime

result = subprocess.run(['git', 'log', '-1', '--format=%ct', 'aider/website/docs/legal/privacy.md'], capture_output=True, text=True)
if result.returncode == 0:
    timestamp = int(result.stdout.strip())
    date = datetime.datetime.fromtimestamp(timestamp)
    cog.out(f"{date.strftime('%B %d, %Y.')}")
]]]-->
October 31, 2024.
<!--[[[end]]]-->
</p>
@@ -14,7 +14,7 @@ Aider has some built in shortcuts for the most popular Anthropic models and
has been tested and benchmarked to work well with them:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export ANTHROPIC_API_KEY=<key> # Mac/Linux
setx ANTHROPIC_API_KEY <key> # Windows, restart shell after setx

@@ -8,7 +8,7 @@ nav_order: 500
Aider can connect to the OpenAI models on Azure.

```
python -m pip install aider-chat
python -m pip install -U aider-chat

# Mac/Linux:
export AZURE_API_KEY=<key>

@@ -39,6 +39,14 @@ export AWS_PROFILE=your-profile
You can add these to your
[.env file](/docs/config/dotenv.html).

## Bedrock with `pipx` installation

The AWS Bedrock provider requires the `boto3` package in order to function correctly. To use aider installed via `pipx` with AWS Bedrock, you must add the `boto3` dependency to aider's virtual environment by running

```
pipx inject aider boto3
```

## Running Aider with Bedrock

@@ -13,7 +13,7 @@ You'll need a [Cohere API key](https://dashboard.cohere.com/welcome/login).
To use **Command-R+**:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export COHERE_API_KEY=<key> # Mac/Linux
setx COHERE_API_KEY <key> # Windows, restart shell after setx

@@ -9,7 +9,7 @@ Aider can connect to the DeepSeek.com API.
The DeepSeek Coder V2 model has a top score on aider's code editing benchmark.

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export DEEPSEEK_API_KEY=<key> # Mac/Linux
setx DEEPSEEK_API_KEY <key> # Windows, restart shell after setx

@@ -12,7 +12,7 @@ with code editing capability that's comparable to GPT-3.5.
You'll need a [Gemini API key](https://aistudio.google.com/app/u/2/apikey).

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export GEMINI_API_KEY=<key> # Mac/Linux
setx GEMINI_API_KEY <key> # Windows, restart shell after setx

@@ -13,7 +13,7 @@ You'll need a [Groq API key](https://console.groq.com/keys).
To use **Llama3 70B**:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export GROQ_API_KEY=<key> # Mac/Linux
setx GROQ_API_KEY <key> # Windows, restart shell after setx

@@ -15,7 +15,7 @@ ollama pull <model>
ollama serve

# In another terminal window...
python -m pip install aider-chat
python -m pip install -U aider-chat

export OLLAMA_API_BASE=http://127.0.0.1:11434 # Mac/Linux
setx OLLAMA_API_BASE http://127.0.0.1:11434 # Windows, restart shell after setx

@@ -8,7 +8,7 @@ nav_order: 500
Aider can connect to any LLM which is accessible via an OpenAI compatible API endpoint.

```
python -m pip install aider-chat
python -m pip install -U aider-chat

# Mac/Linux:
export OPENAI_API_BASE=<endpoint>

@@ -14,7 +14,7 @@ Aider has some built in shortcuts for the most popular OpenAI models and
has been tested and benchmarked to work well with them:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export OPENAI_API_KEY=<key> # Mac/Linux
setx OPENAI_API_KEY <key> # Windows, restart shell after setx
@@ -22,12 +22,18 @@ setx OPENAI_API_KEY <key> # Windows, restart shell after setx
# Aider uses gpt-4o by default (or use --4o)
aider

# GPT-4 Turbo (1106)
aider --4-turbo
# GPT-4o
aider --4o

# GPT-3.5 Turbo
aider --35-turbo

# o1-mini
aider --model o1-mini

# o1-preview
aider --model o1-preview

# List models available from OpenAI
aider --list-models openai/
```

@@ -9,7 +9,7 @@ Aider can connect to [models provided by OpenRouter](https://openrouter.ai/model
You'll need an [OpenRouter API key](https://openrouter.ai/keys).

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export OPENROUTER_API_KEY=<key> # Mac/Linux
setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx
@@ -24,7 +24,7 @@ aider --list-models openrouter/
In particular, many aider users access Sonnet via OpenRouter:

```
python -m pip install aider-chat
python -m pip install -U aider-chat

export OPENROUTER_API_KEY=<key> # Mac/Linux
setx OPENROUTER_API_KEY <key> # Windows, restart shell after setx

@@ -84,16 +84,17 @@ cog.out(''.join(lines))
- NVIDIA_NIM_API_KEY
- OLLAMA_API_KEY
- OPENAI_API_KEY
- OPENAI_LIKE_API_KEY
- OPENROUTER_API_KEY
- OR_API_KEY
- PALM_API_KEY
- PERPLEXITYAI_API_KEY
- PREDIBASE_API_KEY
- PROVIDER_API_KEY
- QDRANT_API_KEY
- REPLICATE_API_KEY
- TOGETHERAI_API_KEY
- VOLCENGINE_API_KEY
- VOYAGE_API_KEY
- XAI_API_KEY
- XINFERENCE_API_KEY
<!--[[[end]]]-->

110 aider/website/docs/more/analytics.md Normal file
@@ -0,0 +1,110 @@
|
||||
---
|
||||
parent: More info
|
||||
nav_order: 500
|
||||
description: Opt-in, anonymous, no personal info.
|
||||
---
|
||||
|
||||
# Analytics
|
||||
|
||||
Aider can collect anonymous analytics to help
|
||||
improve aider's ability to work with LLMs, edit code and complete user requests.
|
||||
|
||||
## Opt-in, anonymous, no personal info
|
||||
|
||||
Analytics are only collected if you agree and opt-in.
|
||||
Aider respects your privacy and never collects your code, chat messages, keys or
|
||||
personal info.
|
||||
|
||||
Aider collects information on:
|
||||
|
||||
- which LLMs are used and with how many tokens,
|
||||
- which of aider's edit formats are used,
|
||||
- how often features and commands are used,
|
||||
- information about exceptions and errors,
|
||||
- etc
|
||||
|
||||
These analytics are associated with an anonymous,
|
||||
randomly generated UUID4 user identifier.
|
||||
|
||||
This information helps improve aider by identifying which models, edit formats,
features and commands are most used.
It also helps uncover bugs that users are experiencing, so that they can be fixed
in upcoming releases.

## Enabling & disabling analytics

You can opt out of analytics forever by running this command one time:

```
aider --analytics-disable
```

To enable analytics for a single session, you can run aider with `--analytics`.
This will *not* have any effect if you have permanently disabled analytics with the previous command.

The first time, you will need to agree to opt-in.

```
aider --analytics

Aider respects your privacy and never collects your code, prompts, chats, keys or any personal
info.
For more info: https://aider.chat/docs/more/analytics.html
Allow collection of anonymous analytics to help improve aider? (Y)es/(N)o [Yes]:
```

If you've added `analytics: true` to your
[yaml config file](/docs/config/aider_conf.html),
you can disable analytics for a single session by running:

```
aider --no-analytics
```

## Details about data being collected

### Sample analytics data

To get a better sense of what type of data is collected, you can review some
[sample analytics logs](https://github.com/aider-ai/aider/blob/main/aider/website/assets/sample-analytics.jsonl).
These are the last 1,000 analytics events from the author's
personal use of aider, updated regularly.

### Analytics code

Since aider is open source, all the places where aider collects analytics
are visible in the source code.
They can be viewed using
[GitHub search](https://github.com/search?q=repo%3Aaider-ai%2Faider+%22.event%28%22&type=code).

### Logging and inspecting analytics

You can get a full log of the analytics that aider is collecting,
in case you would like to audit or inspect this data.

```
aider --analytics-log filename.jsonl
```

If you want to just log analytics without reporting them, you can do:

```
aider --analytics-log filename.jsonl --no-analytics
```

## Reporting issues

If you have concerns about any of the analytics that aider is collecting
or our data practices,
please contact us by opening a
[GitHub Issue](https://github.com/aider-ai/aider/issues).

## Privacy policy

Please see aider's
[privacy policy](/docs/legal/privacy.html)
for more details.
Some files were not shown because too many files have changed in this diff.