Compare commits


305 Commits

Author SHA1 Message Date
qq815776412
754425670e feat(settings-server): upgrade docker node version to 24.0.2 & upgrade nestjs version to 11.1.1 2025-05-19 21:29:50 +08:00
eball
d8a69a146c otel: bump the go auto-instrumentation image version (#1328)
otel: change the go auto-instrumentation image version
2025-05-19 19:30:36 +08:00
eball
7c134bbb1d authelia: replace redis client pool of session provider (#1323)
* authelia: replace redis client pool of session provider

* Update auth_backend_deploy.yaml

* Update auth_backend_deploy.yaml

* feat: add instrumentation to system-server

* Update systemserver_deploy.yaml
2025-05-17 01:20:19 +08:00
aby913
39dbad4ec9 backup-server: queue optimization, backup and restore process adjust (#1326)
backup-server: queue optimization, backup and restore process adjustments
2025-05-16 23:57:26 +08:00
eball
6c1539d65b otel: add arm64 version ubuntu nginx (#1324)
* otel: nginx auto instrumentation config reload bug fix

* otel: add arm64 version ubuntu nginx

* fix: change image tag
2025-05-16 21:00:41 +08:00
hysyeah
a3038f1edb app-service: improve api performance by using k8s informer (#1322) 2025-05-16 00:19:35 +08:00
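The informer approach in commit a3038f1edb replaces per-request calls to the API server with a locally synced cache: a watch stream keeps a store up to date, and read APIs serve from that store. A language-agnostic sketch of the pattern (names are mine; app-service presumably uses client-go's SharedInformerFactory, not this toy):

```python
import threading
from queue import Queue

class TinyInformer:
    """Minimal informer-style cache: an event stream keeps a local store
    in sync, so list/get calls never hit the (slow) upstream server."""

    def __init__(self, events: Queue):
        self.store = {}
        self.events = events
        self._lock = threading.Lock()

    def run(self):
        # Consume watch events until a STOP sentinel arrives.
        while True:
            kind, key, obj = self.events.get()
            if kind == "STOP":
                break
            with self._lock:
                if kind == "DELETED":
                    self.store.pop(key, None)
                else:  # ADDED / MODIFIED
                    self.store[key] = obj

    def list(self):
        # Served entirely from the local cache.
        with self._lock:
            return dict(self.store)

events = Queue()
inf = TinyInformer(events)
t = threading.Thread(target=inf.run)
t.start()
events.put(("ADDED", "pod-a", {"phase": "Running"}))
events.put(("MODIFIED", "pod-a", {"phase": "Succeeded"}))
events.put(("STOP", None, None))
t.join()
print(inf.list())  # {'pod-a': {'phase': 'Succeeded'}}
```

The performance win is that reads cost a local map lookup instead of a round-trip, at the price of briefly stale data.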
huaiyuan
a2c7b16382 desktop: improve data refresh logic by socket after network reconnection (#1321)
fix(desktop): improve data refresh logic by socket after network reconnection
2025-05-16 00:19:09 +08:00
huaiyuan
ac598f66fc studio: show installation status in header bar (#1319)
fix(studio): show installation status in header bar
2025-05-16 00:18:18 +08:00
dkeven
6a8cb38940 fix(chart): remove redundant format symbol in template (#1317) 2025-05-15 21:23:29 +08:00
eball
1c1e7dfdf4 otel: nginx instrumentation arm64 version build bug (#1315)
* otel: nginx auto instrumentation config reload bug fix

* otel: nginx instrumentation arm64 version build bug
2025-05-15 21:22:56 +08:00
aby913
21199571ca backup-server: improve url check for snapshots retrieval and restore … (#1316)
backup-server: improve url check for snapshots retrieval and restore interface
2025-05-15 01:47:57 +08:00
dkeven
f5da7693a9 feat(installer): get rid of redundant subcommand and scripts; collect dmesg logs (#1314) 2025-05-14 17:48:26 +08:00
Peng Peng
668fb373bc feat: let notification server get user information (#1313) 2025-05-14 17:47:10 +08:00
eball
99a20ca23f otel: nginx auto instrumentation config reload bug fix (#1312) 2025-05-13 00:31:22 +08:00
wiy
07478c96d6 fix(settings): the problem of failure to create sub-account (#1311) 2025-05-13 00:30:52 +08:00
hysyeah
6d6f5c248c bfl: fix sub user delete issue (#1310) 2025-05-12 20:27:36 +08:00
simon
8f3507fd86 knowledge&download: fix twitter download failure & update larepass download (#1308)
knowledge
2025-05-11 10:53:21 +08:00
aby913
108c1392e3 backup-server: restore bug fix, sdk supports backup from file list (#1307)
fix: restore bug fix, sdk supports backup from file list
2025-05-10 00:42:32 +08:00
hysyeah
5cd37a477d app-service: fix pull image progress (#1306) 2025-05-10 00:41:59 +08:00
wiy
b137f96517 settings & files: update settings mirror manager & backup, files support backup (#1304)
feat: update settings support mirror manager
feat: update files support backup
feat: update settings backup
2025-05-10 00:41:10 +08:00
eball
dc4d5666d8 olares: fix go instrumentation resource limit typo (#1302)
* olares: fix go instrumentation resource limit typo

* fix: change to resourceRequirements

* fix: upgrade base image
2025-05-10 00:40:46 +08:00
dkeven
b3cb83de9f olaresd: manage registries and images in containerd (#1303)
* olaresd: manage registries and images in containerd

* feat: supports backing up from a list file

---------

Co-authored-by: aby913 <aby913@163.com>
2025-05-09 22:21:23 +08:00
aby913
862cfc4625 backup-server: fix external binding, improve message pushing (#1301) 2025-05-08 23:53:39 +08:00
eball
fa5ca7432c olares: add otel instrumentation image to manifest (#1300)
* olares: add otel instrumentation image to manifest

* fix: add autoinstrumentation-apache-httpd arm64 image

* fix: add go instrumentation resource limit

* fix: change instrumentation protocol

* fix: add sampler ratio env
2025-05-08 23:53:12 +08:00
hysyeah
427bff8b45 ks,node_exporter,installer: add some metrics (#1299) 2025-05-08 23:52:56 +08:00
aby913
b8a3c66003 backup-server: check disk free space, api optimization (#1298)
backup-server: check disk free space
2025-05-08 01:19:37 +08:00
eball
92bf361698 olaresd: steamheadless sunshine mdns proxy (#1297) 2025-05-08 01:19:18 +08:00
wiy
de1cee0000 feat(settings): Encrypted transmission of login password (#1296) 2025-05-08 01:18:56 +08:00
eball
cac1978874 olares: add otel instrumentations (#1295)
* olares: add otel instrumentations

* fix: duplicate container name

* fix: move instrumentation before bfl installation

* feat: change openresty base image to ubuntu

---------

Co-authored-by: liuyu <liuy102@gmail.com>
2025-05-08 01:18:24 +08:00
aby913
1083b417b1 backup-server: support external directory (#1294) 2025-05-06 23:50:26 +08:00
dkeven
d9824a7deb feat: upgrade hami and use original libvgpu.so (#1293) 2025-05-06 23:50:02 +08:00
hysyeah
0aa59ab731 feat(login & wizard): Encrypted transmission of login password (#1292) 2025-05-01 22:55:39 +08:00
simon
28edc29240 download&crawler: fix youtube download failure & crawler cache error (#1291)
ytdlp
2025-05-01 01:05:59 +08:00
dkeven
ef77bff611 feat(installer): md5 password 2025-04-30 15:04:26 +08:00
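Commit ef77bff611's "md5 password" presumably digests the login password before it leaves the client; a minimal stdlib sketch (the payload shape is an assumption, not the installer's actual wire format):

```python
import hashlib

def md5_password(plain: str) -> str:
    """Hex MD5 digest of a password, as the commit title suggests.

    Note: MD5 only obfuscates the value in transit; it is NOT a secure
    password hash, so server-side storage should still use bcrypt/argon2.
    """
    return hashlib.md5(plain.encode("utf-8")).hexdigest()

# Hypothetical login payload carrying the digest instead of the plaintext.
payload = {"username": "olares", "password": md5_password("password")}
print(payload["password"])  # 5f4dcc3b5aa765d61d8327deb882cf99
```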
qq815776412
0667481fcf feat:login & wizard Encrypted transmission of login password 2025-04-30 14:40:12 +08:00
lovehunter9
e16ed5ea64 fix: add init container for files-server (#1288) 2025-04-29 23:47:10 +08:00
simon
93d1237a43 fix: change argo and sync run user (#1287)
permission
2025-04-29 20:01:08 +08:00
hysyeah
42ff86e0af studio-server: change cm push url (#1284) 2025-04-29 00:23:49 +08:00
simon
814dce3dec fix: argo archivelog and knowledge feed save bug (#1283)
knowledge v0.12.4
2025-04-28 18:17:20 +08:00
aby913
bfa43257ff backup-server: abnormal restoration state, get space cos stats failed (#1268) 2025-04-26 00:33:19 +08:00
berg
e1c9e9ad20 fix(vault&wise): some known issues (#1281)
* feat: update wise & vault & files new version to v1.3.54

* feat: update 1.3.55

---------

Co-authored-by: qq815776412 <815776412@qq.com>
2025-04-26 00:09:10 +08:00
hysyeah
1b62d2ae31 lldap,bfl,app-service: user event publish;subnet mask minus 1 (#1277) 2025-04-26 00:07:35 +08:00
berg
51f32c993f profile, market: modify default theme configuration (#1276)
fix: modify default theme configuration
2025-04-26 00:07:05 +08:00
huaiyuan
59749c8b7f desktop: fix iframe hide when zooming the window (#1270) 2025-04-26 00:06:10 +08:00
dkeven
23816103c9 fix: correct minVersion in version.hint to follow semver spec (#1269) 2025-04-26 00:05:44 +08:00
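For reference on commit 23816103c9: the semver spec that `minVersion` must follow publishes an official regular expression, so a conforming check can be sketched directly from it (how olares actually parses `version.hint` may differ):

```python
import re

# Official regex from semver.org: MAJOR.MINOR.PATCH with optional
# pre-release and build-metadata suffixes; no leading zeros or "v" prefix.
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

def is_semver(v: str) -> bool:
    return SEMVER.match(v) is not None

print(is_semver("1.11.5"))       # True
print(is_semver("1.11"))         # False: patch component is required
print(is_semver("1.11.5-rc.1"))  # True: pre-release suffix allowed
print(is_semver("v1.11.5"))      # False: "v" prefix is not semver
```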
0x7fffff92
62489d4ba4 feat: Tailscale for admin user uses tun interface (#1267)
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-04-25 10:58:04 +08:00
huaiyuan
e0803fa6e0 studio: create files err in application page (#1266)
fix: create files err in application page
2025-04-25 10:57:39 +08:00
dkeven
366b81cf46 fix: create crd in helm post-install hook (#1263) 2025-04-25 10:56:18 +08:00
lovehunter9
f7b21a42c7 fix: files-server rename and cut/paste of smb bugfix (#1261) 2025-04-24 15:37:23 +08:00
berg
62ad10d8d8 settings: update settings backup function (#1258)
feat: update settings backup function
2025-04-24 13:53:59 +08:00
huaiyuan
d9cef165ac files: notify message when user cancels upload (#1256) 2025-04-24 00:25:01 +08:00
aby913
7e4b82fff6 backup-server: snapshot progress notification blocking (#1255)
backup-server: snapshot progress notification blocking causing status abnormality
2025-04-24 00:24:34 +08:00
aby913
64c92e5103 fix: lldap usergroup sync, backup notify improve (#1253) 2025-04-23 21:45:27 +08:00
hysyeah
0b7da9bf7a fix: add studio server envoy timeout (#1250)
fix: add studio envoy timeout
2025-04-23 21:08:53 +08:00
eball
c1d5c4e98c olaresd: list more wifi access points (#1249)
* olaresd: list more wifi access points

* Update components
2025-04-23 21:05:58 +08:00
yyh
ae95f1e607 ControlHub: fix workloads operation layout (#1248)
fix(controlHub): fix workloads style disorder at small sizes
2025-04-22 23:51:06 +08:00
aby913
d772842f4b backup-server: add notification, improve api interface (#1246) 2025-04-22 23:50:01 +08:00
simon
8f7584f719 fix: knowledge feed edit and label save bug (#1245)
knowledge
2025-04-22 23:49:16 +08:00
eball
c0f8b391c6 olaresd: support mounting read-only samba share path (#1243) 2025-04-22 23:47:47 +08:00
dkeven
3ff2d30b48 feat(installer): collect more logs (#1240) 2025-04-22 20:55:03 +08:00
huaiyuan
0a8f0c558d files&files-server: add support mount SMB IP (#1238)
files-server: add support mount SMB IP
2025-04-22 20:54:18 +08:00
wiy
d59eb5856e fix: settings frontend add ACL port ui bug (#1237) 2025-04-22 20:53:55 +08:00
aby913
e90df6cd78 backup-server: fix backup to s3, improve api interface (#1235) 2025-04-22 11:10:10 +08:00
eball
04e3fcd71b olaresd: mark as mounted (#1234) 2025-04-21 21:01:48 +08:00
eball
e74726c5ec tapr: replace nxdomain with noerror (#1232) 2025-04-21 21:01:18 +08:00
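Context for e74726c5ec's NXDOMAIN→NOERROR swap: NXDOMAIN (rcode 3) asserts the queried name does not exist at all, while NOERROR (rcode 0) with an empty answer section ("NODATA") means the name exists but has no records of the requested type — the correct reply when a name resolves for some types only. A toy model of that distinction (function and zone shape are mine, not tapr's code):

```python
NOERROR, NXDOMAIN = 0, 3  # standard DNS response codes (RFC 1035)

def answer(zone: dict, name: str, rtype: str):
    """Return (rcode, records) the way a correct resolver should."""
    if name not in zone:
        return NXDOMAIN, []
    # Name exists: even with no records of this type, the right answer
    # is NOERROR with an empty answer section, not NXDOMAIN.
    return NOERROR, zone[name].get(rtype, [])

zone = {"app.local": {"A": ["10.0.0.8"]}}
print(answer(zone, "app.local", "AAAA"))  # (0, [])  NOERROR, empty
print(answer(zone, "gone.local", "A"))    # (3, [])  NXDOMAIN
```

Answering NXDOMAIN for a name that does have other record types can make clients cache the name as nonexistent, which is presumably the bug this commit avoids.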
eball
e6478aa77c otel: run collector as user 1000 (#1231) 2025-04-21 21:00:55 +08:00
berg
bba3083752 market: Update the error message when the user has insufficient resources during app preflight (#1229)
feat: market v0.3.10 release
2025-04-19 01:18:52 +08:00
aby913
5b6973a6ab backup-server: api interface enhancement (#1227) 2025-04-19 01:17:45 +08:00
huaiyuan
99185c4729 studio&controlHub: coding in olares by studio (#1225)
* studio&controlHub: coding in olares by studio

* feat: studio server image tag

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-04-19 01:16:44 +08:00
eball
bd631167f5 olaresd: allow mounting a subpath of the share point (#1223)
* olaresd: allow mounting a subpath of the share point

* Update components
2025-04-19 01:15:49 +08:00
aby913
8e3ddfb8af backup-server: resolved restoration from space and COS using backupUr… (#1222)
backup-server: resolved restoration from space and COS using backupUrl, enhanced API interface data format
2025-04-17 23:32:27 +08:00
simon
71ccfd34c6 fix(knowledge): recommend install and uninstall error (#1221)
knowledge v0.12.1
2025-04-17 23:31:55 +08:00
eball
54bd129c33 olaresd: list samba share names before mounting (#1218) 2025-04-17 23:30:29 +08:00
hysyeah
c4a88aea86 ks,Installer: node shell add lang env (#1216) 2025-04-16 23:57:20 +08:00
aby913
11aa89687c backup-server: restore params invalid, api response data format (#1215)
backup-server: restore snapshotId invalid, api response data format
2025-04-16 23:56:42 +08:00
simon
ac887e9201 fix(knowledge): redis addr error (#1214)
redis addr
2025-04-16 20:19:40 +08:00
aby913
e8aa4b3521 backup-server: backup local path invalid, api response data format (#1213) 2025-04-16 00:44:31 +08:00
simon
6f4a091380 fix(knowledge): argo archivelogs and knowledge service error (#1212)
* mr

* bug fix

* archivelogs
2025-04-15 18:06:24 +08:00
eball
939c9671b9 Update check.yaml 2025-04-15 16:05:07 +08:00
eball
a129ea79ca Update daily-lint-check.yaml 2025-04-15 15:51:20 +08:00
eball
ce40d04085 olares: lint errors in values.yaml (#1210)
* olares: lint errors in values.yaml

* remove empty lines

* fix: lint error in appservice_deploy.yaml

* fix: lint error in auth_backend_deploy.yaml

* fix: all lint errors

* fix: lint errors in backup_server.yaml

* fix: lint errors in citus_deployment.yaml

* fix: all lint errors

* fix: all lint errors

---------

Co-authored-by: liuyu <>
2025-04-15 13:18:07 +08:00
aby913
cddc5d1ea9 backup-server: fix backup total size (#1211) 2025-04-15 00:03:36 +08:00
huaiyuan
130bcb2a6a files: update Larepass new version to v1.3.50 (#1208) 2025-04-15 00:01:13 +08:00
Calvin W.
dbb52c5d67 docs: update Olares platform support info (#1207) 2025-04-15 00:00:35 +08:00
eball
c95c9fb9d2 olares: daily lint check all charts files (#1206)
Co-authored-by: liuyu <>
2025-04-14 19:04:11 +08:00
simon
6a686098bd fix(knowledge): db connect error (#1205)
* secret

* secret

* pg_password

* debug

* debug

* secret

* secret add hook

* knowledge
2025-04-14 14:58:12 +08:00
eball
6fb634f3fb olares: add lint check listing changed files scope (#1204)
* olares: add lint check listing changed files scope

* Update appservice_deploy.yaml

* Update check.yaml
2025-04-12 13:19:05 +08:00
simon
c19ee276dc feat: move argo,knowledge and download to os-system (#1198)
* move to os-system

* host path

* test

* debug

* debug

* debug

* debug

* debug

* argo add values

* debug

* debug

* debug

* debug

* remove keyFormat
2025-04-11 20:53:50 +08:00
wiy
76e1981816 fix(settings): network update cloudflare to frp error (#1203) 2025-04-11 00:20:19 +08:00
eball
bc319d8901 tapr: fix corefile updating bug (#1201) 2025-04-11 00:19:16 +08:00
eball
39e4663461 olaresd: add noserverino option to cifs mount (#1199) 2025-04-10 22:10:12 +08:00
eball
4efa2714f0 olares, app-service: fix hami gpu monitoring configuration bug (#1197)
* olares: fix hami gpu monitoring configuration bug

* app-service: underlay namespace labels modified

---------

Co-authored-by: liuyu <>
2025-04-10 20:58:09 +08:00
yyh
7be076b9a6 controlhub/studio: update dialog and fix studio deploy app (#1195)
fix(controlhub/studio): update dialog and fix studio deploy app
2025-04-09 23:19:03 +08:00
aby913
855e634fc5 backup-server: query page, pool with multi users (#1193) 2025-04-09 23:18:05 +08:00
eball
ffce1b6039 olares: hami monitoring api for dashboard (#1192)
* feat: hami monitoring api for dashboard

* fix: values bug

---------

Co-authored-by: liuyu <>
2025-04-09 23:17:38 +08:00
aby913
03fa1f0c88 backup-server: api adjustment, working pool integration (#1191)
backup-server: api adjustment, working pool integration and other improvements
2025-04-08 23:32:01 +08:00
yyh
2a6fed8875 studio: automatically refresh the workloads (#1190)
fix(studio): support automatic refresh of workload
2025-04-08 23:31:32 +08:00
eball
f8554e95dc tapr: ignore deleting a nonexistent namespace (#1188)
Co-authored-by: liuyu <>
2025-04-08 23:30:33 +08:00
eball
8094e65a2f tapr: add other query type response code (#1186)
fix: add other query type response code

Co-authored-by: liuyu <>
2025-04-08 23:29:51 +08:00
hysyeah
e5e235cc44 app-service: pull image with unpack; del cache dir by calling files (#1184)
* app-service: pull image with unpack; del cache dir by calling files

* fix: update image service tag
2025-04-08 11:52:40 +08:00
eball
42f28ba28d olares: mark the market as cluster critical (#1183)
Co-authored-by: liuyu <>
2025-04-07 21:27:48 +08:00
aby913
7243ba8dc0 backup-server: fix bugs in api and worker management (#1179) 2025-04-07 10:53:55 +08:00
salt
013b67acf4 fix: fix cloud drive lock not released when some thread corrupted (#1178)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-04-07 10:53:20 +08:00
berg
00ce2f1183 wise: optimized some copy text (#1175)
feat: update wise v1.3.47
2025-04-03 18:00:34 +08:00
huaiyuan
41e6ba6ced studio: update version to v0.2.4 (#1172)
* studio: update version to v0.2.4

* fix: app cache,data dir

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-04-03 17:29:48 +08:00
wiy
bbbd748a63 feat: update files & wise new version to v1.3.46 (#1169) 2025-04-03 11:11:57 +08:00
huaiyuan
2d9f86d30e studio&studio server&app service: fix some bugs (#1167)
* studio,studio-service: bug fix

* studio: fix some bugs

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-04-03 11:11:04 +08:00
huaiyuan
c3908fbb09 desktop: update the display logic for delete icons in Launchpad (#1163) 2025-04-03 11:09:06 +08:00
hysyeah
ea00dc1528 studio,studio-server: fix some bugs (#1161) 2025-04-02 11:11:42 +08:00
berg
c04e8b508b market, app-service: Conflict Resolution, Dependency Check, and App Store Data Integration (#1159)
* feat: update market and app-service version

* fix: update image tag

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-04-02 11:10:54 +08:00
eball
a1d9e179f4 authelia, notifications: send login msg to notification server from authelia (#1157)
Co-authored-by: liuyu <>
2025-04-01 23:03:29 +08:00
aby913
af26af85ba feat: supporting folder backup and restoration (#1155)
feat: backup-server refactoring
2025-04-01 21:17:39 +08:00
dkeven
452d7260d0 fix(installer): add MARKET_PROVIDER to global envs (#1151) 2025-04-01 21:16:13 +08:00
huaiyuan
936e4a3e36 devbox&devbox server&app server: Initialize Studio (#1143)
* devbox: refactor devbox

* feat: devbox nginx

* feat: update devbox server tag

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-04-01 00:24:24 +08:00
wiy
832d9a3f28 feat(files-server & files & settings): update files frontend & files server version (#1149)
* fix: files external move folder bug

* fix: display google drive root error

* fix: settings frontend use default language error

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-04-01 00:06:07 +08:00
berg
932cc112b0 market: modify cs app to shared app (#1147)
feat: modify cs app to shared app
2025-04-01 00:04:54 +08:00
eball
2cc485b18d authelia: send user login related message to notification server via nats (#1140)
* feat: send user login related message to notification server via nats

* fix: nats configurations

---------

Co-authored-by: liuyu <>
2025-03-31 21:40:10 +08:00
eball
2a2a3cf695 feat: move notifications server to os-system (#1139)
* feat: move notifications server to os-system

* fix: modified nats request refs app name

* fix: bump notifications-api version to v1.12.0

* fix: remove notification api from system frontend

---------

Co-authored-by: liuyu <>
2025-03-31 16:44:52 +08:00
hysyeah
8e5736dcbc ks: fix a bug and add some logs (#1138) 2025-03-29 00:49:38 +08:00
hysyeah
b910e15ed2 market,app-service: merge cs chart to one (#1137)
feat: merge cs chart to one
2025-03-29 00:49:02 +08:00
eball
64e211f090 l4-bfl-proxy, tapr, authelia: fix local domain solution bugs (#1134)
Co-authored-by: liuyu <>
2025-03-28 21:29:40 +08:00
aby913
a5a1956898 fix(installer): add cli command for querying supported backup regions (#1135)
* fix(installer): add cli command for querying supported backup regions

* fix: files-server jsonify message for status 500 (#1129)

fix: files-server jsonify message for 500

---------

Co-authored-by: lovehunter9 <39935488+lovehunter9@users.noreply.github.com>
2025-03-28 21:27:34 +08:00
hysyeah
10ecba5e74 installer,studio: move studio back to user space (#1131)
* feat: move studio back to user space

* feat: update permissions
2025-03-28 20:28:56 +08:00
lovehunter9
9a1b5a8e75 fix: files-server jsonify message for status 500 (#1129)
fix: files-server jsonify message for 500
2025-03-28 20:04:07 +08:00
dkeven
a4b46b9ec7 fix(installer): pass the correct coredns service ip (#1128)
* fix(installer): pass the correct coredns service ip

* fix: add privileges of configmap to component sys-event

* fix: update reverse proxy image

---------

Co-authored-by: liuyu <>
2025-03-28 16:02:14 +08:00
hysyeah
66585996b2 app-service: fix nil tailscale in update application (#1127)
Co-authored-by: eball <liuy102@hotmail.com>
2025-03-28 00:11:34 +08:00
dkeven
0c7b1d9d27 feat: support custom domain in both cloudflare and FRP tunnel (#1126)
* feat(bfl): support custom domain in both cloudflare and FRP tunnel

* feat(settings): update settings config third domain

---------

Co-authored-by: qq815776412 <815776412@qq.com>
2025-03-27 23:17:28 +08:00
eball
67dd2f7e2e bfl, authelia, tapr: new solution for local domain (#1124)
* bfl, authelia, tapr: new solution for local domain

* feat: bump the components version

* feat: ts-routes env

* feat: adjust MagicDNS configuration

* feat(installer): inject coredns service ip to global envs

* feat: add terminus global envs for tailscale

* fix: tailscale envs

---------

Co-authored-by: liuyu <>
Co-authored-by: hys <hysyeah@gmail.com>
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
Co-authored-by: dkeven <dkvvven@gmail.com>
2025-03-27 23:17:02 +08:00
simon
99e23b6411 feat(knowledge): update knowledge new version to v0.1.68 (#1125)
knowledge v0.1.68
2025-03-27 21:49:53 +08:00
salt
95b1b49dd1 fix: add metadata when return to frontend (#1122)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-27 11:13:15 +08:00
salt
88021287b3 fix: fix latest reconstruct awss3 error, mainly about repeat file or … (#1120)
fix: fix latest reconstruct awss3 error, mainly about repeat file or folder and delete error

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-27 11:12:15 +08:00
wiy
4f0587ea6f feat(files&wise&files-server): update files & wise new version to v1.3.44 (#1119)
* feat: files add awss3 features support which are left in the last version

* feat: update files support awss3

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-03-27 01:11:43 +08:00
wiy
8c77fa8e0c feat(settings): update settings support vpn config (#1117)
* feat: update settings support vpn config

* feat: tailscale subnet

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-03-27 01:10:44 +08:00
eball
4f64f7b2af tapr: persist kvrocks namespace config (#1116)
fix: persist kvrocks namespace config

Co-authored-by: liuyu <>
2025-03-27 01:09:58 +08:00
hysyeah
6878f4f4e6 app-service: fix upgrade values (#1114) 2025-03-26 21:26:22 +08:00
simon
688a10b637 knowledge: update knowledge to v0.1.67 (#1112)
knowledge v0.1.67
2025-03-26 21:25:33 +08:00
eball
15a9540879 authelia: fix cached redis session provider gc api (#1110)
Co-authored-by: liuyu <>
2025-03-26 21:24:58 +08:00
huaiyuan
cc9ae24140 desktop&login: add intent to support open file in files (#1107) 2025-03-26 01:03:35 +08:00
eball
4981f3c65a olares: uploading last chunk of a file got 504 timeout response (#1105)
Co-authored-by: liuyu <>
2025-03-26 01:02:58 +08:00
dkeven
2e3bbf991f fix(gpu): update libvgpu.so with more tolerant GLIBC requirements (#1104) 2025-03-25 15:45:16 +08:00
eball
708bd25a12 olaresd: change the command collect-logs to olares-cli (#1102) 2025-03-25 10:57:31 +08:00
salt
0139d96a25 feat: basically complete s3 reconstruction (#1103)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-25 10:57:14 +08:00
wiy
6e8d04bf4f feat(Files&Vault): update files & vault to new version to v1.3.43 (#1100)
* feat: update files & vault to new version to v1.3.43

* files-server add awss3 support (with known bugs), permission relative and md5 check of uploader

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-03-22 01:40:53 +08:00
hysyeah
08293c71bc app-service: add download cdn url to helm values (#1098) 2025-03-22 01:39:22 +08:00
eball
ce89430594 olares: fix opentelemetry instrumentation config (#1097)
* olares: fix opentelemetry instrumentation config

* fix: comment out auto instrumentation temporarily

* fix: jaeger collector config

---------

Co-authored-by: liuyu <>
2025-03-22 01:38:59 +08:00
hysyeah
358cd71049 app-service: set upgrade job ttl to 30 days (#1095) 2025-03-21 14:59:56 +08:00
hysyeah
7cca14e288 ks: add pod metric route (#1094) 2025-03-20 17:26:00 +08:00
dkeven
f17a787624 feat(installer): add commands to get logs & start/stop Olares; optimize shutdown performance (#1092) 2025-03-20 01:17:26 +08:00
hysyeah
ef3c7c82cc lldap: change lldap db to postgresql (#1091)
* change lldap db to postgres

* fix: remove some image
2025-03-19 00:24:38 +08:00
eball
c9d25d1f74 olares: add system upgrading apps checking (#1090)
olares: add system upgrading files server checking

Co-authored-by: liuyu <>
2025-03-19 00:24:10 +08:00
dkeven
1ab027b9da feat(frp): add error logs (#1088) 2025-03-18 01:26:41 +08:00
eball
f3b481fbf2 olares: increase envoy idle timeout for files-frontend (#1087)
Co-authored-by: liuyu <>
2025-03-17 21:43:38 +08:00
lovehunter9
f1b8fa5aea feat: files permission relative (#1080) 2025-03-15 00:15:02 +08:00
berg
966ac1d605 wise, file: fix the issue of resumable.js retrying uploads from 0; merge duplicate upload tasks and optimize the wise filter (#1083)
feat: update files and wise version
2025-03-14 23:06:22 +08:00
simon
9331be628b knowledge&download: update knowledge to v0.1.66, download-spider to v0.0.20 (#1082)
knowledge v0.1.66
2025-03-14 23:05:42 +08:00
hysyeah
ab6494049f app-service: revert hostpath chown 1000;remove handle model code (#1079) 2025-03-14 20:48:48 +08:00
wiy
4464dcf2b1 feat(settings): add entrance endpoint url & fix WebSocket keep-alive (#1075)
feat(settings): add entrance endpoint url & fix WebSocket keep-alive error
2025-03-14 00:04:39 +08:00
eball
e00a6ba27a l4-bfl-proxy: optimize l4 proxy gateway performance (#1073)
Co-authored-by: liuyu <>
2025-03-14 00:03:52 +08:00
eball
3a5b53fa57 olares: fix the opentelemetry annotations configuration bugs (#1072)
* olares: fix the opentelemetry annotations configuration bug

* fix: wrong annotation configurations

* fix: wrong annotation configurations

---------

Co-authored-by: liuyu <>
2025-03-14 00:02:56 +08:00
huaiyuan
e0a670628c desktop: request data when socket err or network offline (#1070) 2025-03-12 23:27:23 +08:00
aby913
7ced9702df feat(installer): support data backup, restore in olares-cli (#1069) 2025-03-12 23:26:58 +08:00
eball
09cb6075ad olares: use the pod localhost address as the infisical server address for the infisical sidecar (#1068)
Co-authored-by: liuyu <>
2025-03-12 23:26:19 +08:00
hysyeah
d8ba35adbe tapr,bfl:add tapr-image-role secrets permission;fix create user cpu check (#1066) 2025-03-12 21:24:01 +08:00
eball
da469f4f27 tapr: add missing fields of db table organizations in Infisical sidecar (#1064)
Co-authored-by: liuyu <>
2025-03-12 21:04:15 +08:00
hysyeah
d7265418cd fix: change ks image tag (#1061) 2025-03-12 20:14:06 +08:00
salt
0f12d4e5df fix: optimize google,dropbox direct upload (#1060)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-12 20:12:32 +08:00
wiy
f3a76a229f feat(files): update files support google drive & dropbox (#1057) 2025-03-12 15:40:49 +08:00
dkeven
6bc4ec410a fix: add the missing kubernetes image (#1056) 2025-03-12 15:38:38 +08:00
dkeven
cad586985f feat(installer): support swap and zram configurations (#1055) 2025-03-12 14:45:51 +08:00
berg
6f1b1c667a market: reconnect socket and reinitialize data on app return (#1053)
feat: market release v0.3.6 version
2025-03-12 00:03:19 +08:00
lovehunter9
d334a537d1 style: files-server project structure reconstruction (#1051) 2025-03-12 00:02:22 +08:00
hysyeah
744edb7969 fix: add node shell image to pre download (#1050) 2025-03-12 00:01:08 +08:00
eball
3e506527a2 tapr: move infisical secret service to os-system as a singleton instance (#1047)
* tapr: move infisical secret service to os-system as a singleton instance

* fix: middleware configuration

* fix: cluster role bug

---------

Co-authored-by: liuyu <>
2025-03-11 00:28:56 +08:00
hysyeah
58a9264fab app-service: change hostpath with type DirectoryOrCreate owner to 1000 by injecting an init container (#1046) 2025-03-10 22:19:55 +08:00
yyh
a36ecdddc9 control-hub: fix terminal route path conflict (#1045)
fix(control-hub): fix terminal route path conflict
2025-03-10 21:06:21 +08:00
eball
9b5aa0e550 olares: add opentelemetry to the cluster to trace its services (#1042)
* feat: add opentelemetry operator to cluster

* feat: add instrumentation injecting

* fix: add webhook test pod

* fix: update helm hook to install webhook priority

* fix: update priority

* fix: post install otel webhook

* fix: collector bug & post install to wait operator running

* fix: alpine 3.3 has no arm64 version

---------

Co-authored-by: liuyu <>
2025-03-09 21:29:15 +08:00
hysyeah
4567cc4cfe olares: fix special leading char cause helm render error (#1040) 2025-03-07 00:34:37 +08:00
berg
3b49853bd4 wise, knowledge: add reading progress function and fix some bugs (#1039)
feat: update wise and knowledge version
2025-03-07 00:34:11 +08:00
huaiyuan
ad37446fc1 desktop: launch display different icons on different devices (#1037) 2025-03-06 15:49:54 +08:00
dkeven
01644ec8b3 feat: use HAMi with nvshare as GPU plugin (#1033) 2025-03-06 15:47:53 +08:00
wiy
492e56becb files: update files new version to 1.3.39 (#1029)
* fix: seafile remove recv file log for more stable uploading

* fix: upload retry error & sync upload refresh files

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-03-05 23:57:40 +08:00
yyh
0e9d57051f feat(control-hub & ks): add node terminal (#1028)
* feat(control-hub): add node terminal

* feat: handle node default shell to bash

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-03-05 23:57:18 +08:00
huaiyuan
a90ab98631 fix: update @bytetrade/core to 0.2.53 (#1026) 2025-03-05 23:56:08 +08:00
eball
d1232f37c3 fix: increase ingress client body buffer size (#1023) 2025-03-05 23:54:41 +08:00
dkeven
9e9267b4b0 fix(bfl): fetch current user object before every configure operation (#1021) 2025-03-05 23:54:02 +08:00
berg
55bcb45ab2 wise, file: update files & wise new version to 1.3.38 (#1019)
* fix: files changed to feed drive_server 0.0.50 and cache using the newest version; uploader offset judging changed for SMB 499, improving uploading speed

* feat: update files & wise new version

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: qq815776412 <815776412@qq.com>
2025-03-04 23:59:54 +08:00
dkeven
710491d8ed feat: upgrade k8s to 1.32 (#1014) 2025-03-04 20:48:09 +08:00
huaiyuan
323dc52e59 login&desktop: open a new tab when on mobile and tablet devices (#1015)
login&desktop: open the app in a new tab when on mobile and tablet devices
2025-03-04 00:05:53 +08:00
dkeven
c02910400e feat(bfl): add watcher to apply reverse proxy (#1013) 2025-03-04 00:05:17 +08:00
eball
0e25eb1d8b olaresd: remove smb mounting blocksize option to use the default value (#1011) 2025-03-04 00:04:29 +08:00
hysyeah
ee1e2abed0 app-service: fix envoy outbound port (#1010) 2025-03-04 00:04:06 +08:00
aby913
ea24c1a33c ci: build restic (#1001) 2025-03-03 21:23:02 +08:00
simon
c993d936be knowledge&download: update knowledge to v0.1.64, download-spider to v0.0.19 (#1007)
knowledge v0.1.64
2025-03-03 12:07:52 +08:00
salt
7ba5b5628a feat: add id-route for file info, fix file size limit when direct upload (#1005)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-03 11:07:13 +08:00
huaiyuan
94181ab9db login&desktop: update desktop dock logic and optimize mobile device (#1002)
login&desktop: update desktop dock logic and optimize mobile device
2025-02-28 23:55:11 +08:00
hysyeah
9f2f390b5a app-service: custom allowed outbound port;tcp udp port (#997)
* app-service: custom allowed outbound port;tcp udp port

* fix: add idle timeout to original_dst cluster

---------

Co-authored-by: liuyu <>
2025-02-27 23:59:46 +08:00
Calvin W.
c514ecec20 docs: fix bad link in readme (#996) 2025-02-27 00:07:51 +08:00
hysyeah
1fcbd0b790 app-service: fix app installation can not be canceled after reboot (#993) 2025-02-26 00:33:31 +08:00
salt
5bb3143f57 feat: cloud drive async upload rename (#992)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-26 00:33:05 +08:00
eball
b368735e27 bfl-ingress: increase keepalive requests of ingress (#990) 2025-02-26 00:31:57 +08:00
huaiyuan
e7792c272e files&files server: add support for google drive and dropbox (#989)
* feat: files add support for google drive and dropbox

* fix(files): update google drive and dropbox

* limit version for appdata-backend

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-25 13:13:50 +08:00
huaiyuan
f622bec74f desktop: update highlight txt in search (#988) 2025-02-24 23:33:54 +08:00
hysyeah
cc3d8faabf tapr: fix create stream return nil value (#985) 2025-02-24 23:32:34 +08:00
salt
2ec8abe45c fix: fix async upload from terminus to dropbox file size error (#984)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-24 23:32:09 +08:00
salt
97e67e4e28 feat: optimize search3 (#981)
* feat: optimize search3

* feat: desktop-server change for search3 merge result

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-24 18:50:33 +08:00
simon
ce5120008d knowledge: update knowledge to v0.1.63 (#980)
knowledge v0.1.63
2025-02-21 23:56:20 +08:00
yyh
80003178bf fix(desktop): disable PWA in safari on the desktop (#979) 2025-02-21 23:55:53 +08:00
hysyeah
946598e731 tapr, system-server: fix auth token validate (#977) 2025-02-21 23:54:52 +08:00
berg
e311ab4f72 market: allow paused apps to update (#975)
feat: update market to v0.3.5
2025-02-21 23:53:46 +08:00
simon
678645a243 knowledge&download: update knowledge to v0.1.62, yt-dlp to v0.0.20 (#973)
knowledge update
2025-02-20 23:28:07 +08:00
hysyeah
61344115f2 app-service,kubesphere: get best cdn server in upgrade job; change kubectl image tag (#972)
* app-service,kubesphere: get best cdn server in upgrade job; change kubectl image tag

* Update images

* Update appservice_deploy.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-02-20 23:27:35 +08:00
eball
c227e9ba21 olaresd: optimize smb mount options & add api for oic (#969) 2025-02-20 17:11:52 +08:00
simon
e98c276bf0 download&backend server: update download-spider to v0.0.17, backend to v0.0.26 (#967)
add twitter, zhihu extract
2025-02-20 00:39:49 +08:00
huaiyuan
4d4f8999d0 larepass&files&files server: update LarePass version to v1.3.31 (#965)
* fix: sync recursive pasting with escape

* fix(files): block slashes when creating/renaming and update notify msg

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-20 00:39:18 +08:00
hysyeah
e1ad84bca5 kubesphere, bfl, authelia, app-service, system-server, installer: ks remove unused code;support lldap auth (#959)
* feat: ks remove unused code;support lldap auth

* fix: update monitoring server

* fix: update cli version
2025-02-20 00:38:36 +08:00
huaiyuan
9587345155 larepass&files&files server: update LarePass version to v1.3.30 (#964)
* fix: pasting to sync with special characters

* fix(files): prompt message when a backslash appears in sync

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-18 23:52:10 +08:00
eball
14400a559e files: make the files server running as root (#960) 2025-02-18 23:50:27 +08:00
huaiyuan
65211ba044 larePass&files&files server: update LarePass version to v1.3.29 (#957)
* fix: deal with special characters for drive/cache/sync, fix loss of uploading progress on restart for the uploader

* fix(files): fix bug of special character error in file name

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-18 00:18:21 +08:00
huaiyuan
c4516d19c7 login: display login content on Safari browser (#955)
fix: display login content on Safari browser
2025-02-17 23:51:35 +08:00
yyh
4064ccf393 fix(desktop): fix: fix resource cache in safari browser and some ui bug (#954) 2025-02-17 23:51:01 +08:00
berg
74377bd655 settings: hide user email entry (#952)
feat: update settings v0.2.11
2025-02-17 22:19:41 +08:00
eball
ac33371b57 bfl: increase l4 proxy nginx worker process number to half of cpu cores (#949)
bfl: increase nginx worker process to half of cpu cores
2025-02-17 22:04:26 +08:00
salt
4617d8828a feat: fix known dropbox, googledrive problems (#948)
feat: fix known dropbox, googledrive problems

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-17 10:55:37 +08:00
hysyeah
c117ea6c8f app-service: change user space network policy for ipblock (#946)
fix: change user space network policy for ipblock
2025-02-13 23:42:41 +08:00
hysyeah
c290145ea8 app-service: continue to resume op after restart; envoy inbound tcp proxy (#943)
* app-service: continue to resume op after restart; envoy inbound tcp proxy

* ci: fix upload script bug

---------

Co-authored-by: liuyu <>
2025-02-12 22:51:28 +08:00
dkeven
e56978b164 fix(installer): restart coredns when change ip, raise cri timeout (#941) 2025-02-12 01:12:09 +08:00
eball
afc83d5c85 tapr: add node affinity to citus and kvrocks (#939)
Co-authored-by: liuyu <>
2025-02-11 13:44:33 +08:00
eball
9f324692bd olares: upload the original file with md5 as a backup (#938)
* olares: upload original file with md5 as a backup

* olares: upload original file with md5 as a backup

---------

Co-authored-by: liuyu <>
2025-02-10 20:28:41 +08:00
liuyu
bb471ba463 suspend daily build 2025-01-31 09:59:41 +08:00
eball
b08174353a olares: remove some debug code (#935)
fix: remove some debug code

Co-authored-by: liuyu <>
2025-01-24 13:41:05 +08:00
eball
60bedc6c46 app-service: remove app cache path on the hosts directly (#936)
* app-service: remove app cache path on the hosts directly

* Update appservice_deploy.yaml
2025-01-24 11:05:07 +08:00
huaiyuan
98984ead44 files: delete notify id in notifyHide (#932)
fix: delete notify id in notifyHide
2025-01-23 23:01:13 +08:00
eball
a578148d5e olaresd: allow mounting an external device to ai path (#929)
olaresd: allow mounting an external device to ai path
2025-01-23 20:23:34 +08:00
eball
35c2072d9c app-service: inject nvshare environment duplicately (#927) 2025-01-23 20:23:01 +08:00
huaiyuan
9b57981490 files&files server: update LarePass version to v1.3.25 (#925)
* uploader v1.0.9 to make the final stage of uploading big files invisible; increase files nginx workers to auto and increase timeouts of files nginx, envoy, and seafile nginx

* files: notify each operation when pasting

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-23 20:21:52 +08:00
aby913
45d32ef568 fix(installer): prompt for the installation location and setup host ip as nat gateway ip for oic (#923) 2025-01-23 20:11:47 +08:00
huaiyuan
01d259870a files&files server: update LarePass version to v1.3.24 (#919)
* fix: files nginx increase workers and timeout, and make pasted temp files invisible

* fix: fix create new folder in sync and update nginx timeout

* fix: increase the ingress read timeout

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: liuyu <>
2025-01-22 21:33:32 +08:00
0x7fffff92
e94c3acf25 fix: let tailscale follow headscale restart (#917)
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-22 16:58:39 +08:00
aby913
d95c577789 fix(installer): wsl hangs on update (#916) 2025-01-22 15:33:44 +08:00
simon
f72e4b903c knowledge: update version to v0.1.61 (#908)
knowledge
2025-01-22 14:03:16 +08:00
aby913
2c57b6f35a ci: build wsl-msi script fix (#907)
ci: build script fix
2025-01-21 23:31:24 +08:00
yyh
00c44e2797 fix(control-hub): fix pod status sync after delete replicas (#912) 2025-01-21 22:22:52 +08:00
huaiyuan
9fa30c9034 files&files server: disable nats and expand upload size limit to 100G (#909)
* fix: disable nats and expand upload size limit to 100G

* fix: files disable socket and expand upload size limit to 100G

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-21 22:22:39 +08:00
aby913
764547abda ci: add build-wsl-package workflow (#901) 2025-01-21 20:55:07 +08:00
huaiyuan
f08b03863d files&files server: update larepass version to v1.3.20 (#905)
* fix: files immediately send events for remove/rename and folder create

* fix: fix files uploadModal count err and filter md5

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-21 19:48:37 +08:00
eball
1a2f45760a olaresd: make usb device mounting compatible with ata bridge (#903) 2025-01-21 19:06:23 +08:00
aby913
ab596896c7 ci: upload wsl2 installation package (#895)
ci: upload wsl-install-msi
2025-01-21 01:33:46 +08:00
simon
4e13cc2f9e download: update yt-dlp download version to v0.0.19 (#900)
yt-dlp
2025-01-21 01:33:15 +08:00
huaiyuan
d17514e94a files&settings&market&files server: update version larepass to v1.3.19 (#898)
fix: files-server memory explosion bug by deleting md5 and buffering io.Copy
2025-01-20 23:42:24 +08:00
eball
dcaa0e7755 installer: install cifs-utils for mounting smb path (#893)
fix: install cifs-utils for mounting smb path

Co-authored-by: liuyu <>
2025-01-20 17:08:51 +08:00
hysyeah
1c9dfc702f app-service: support network visit from windows app (#891) 2025-01-20 00:38:15 +08:00
huaiyuan
1977c12c16 files, appdata-gateway,uploader: smb support, md5 function, cache preview and fix a pvc problem (#889)
* files, appdata-gateway and uploader: smb support, md5 function, cache preview and fix a pvc problem

* files, appdata-gateway and uploader: smb support, md5 function, cache preview and fix a pvc problem

* feat: mount smb share file & connect wifi via ble

* Merge branch 'smb_md5_history' of github.com:beclab/olares into smb_md5_history

# Conflicts:
#	apps/files/config/cluster/deploy/files_deploy.yaml

* files: external add smb server and files can view MD5

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: hysyeah <hysyeah@gmail.com>
Co-authored-by: liuyu <>
2025-01-18 00:54:41 +08:00
dkeven
4c69c7df7f fix(installer): modify some commands to be compatible with running in a container (#888) 2025-01-17 22:42:22 +08:00
hysyeah
bd591d106f app-service: inject nvshare-debug env (#886) 2025-01-17 21:35:26 +08:00
dkeven
d5ca9826e8 fix(installer): issues in wsl downloading/ssh sudo/containerd install (#884) 2025-01-17 21:30:53 +08:00
Calvin W.
eb1f35f934 docs: update the latest arch diagram (#883) 2025-01-17 19:10:53 +08:00
Calvin W
3007354c76 update the latest version 2025-01-17 13:39:07 +08:00
Calvin W
62a3152574 docs: update the latest arch diagram 2025-01-16 19:21:50 +08:00
eball
f785c89999 olares,bfl: update critical pods priority class (#879)
olares: update critical pods priority class

Co-authored-by: liuyu <>
2025-01-16 16:54:45 +08:00
berg
b502dfc1ef settings, dashboard: restore settings app entrance status notification and dashboard websocket (#876)
* fix: fix dashboard and settings websocket and update application entrance status

* fix: move dashboard ws nginx proxy
2025-01-16 00:16:01 +08:00
eball
baae5a5632 bfl: fix headscale acl api path parameters (#874) 2025-01-16 00:15:31 +08:00
dkeven
5c9a6dfa87 fix(installer): don't wipe juicefs when uninstalling worker (#873) 2025-01-15 21:34:30 +08:00
Calvin W.
86fcaf16c0 docs: remove comparison table and update arch diagram in readme (#871)
* docs: remove comparison table and update arch diagram

* Apply suggestions from code review

Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>

---------

Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>
2025-01-15 21:33:32 +08:00
berg
3225626ad9 bfl, settings, app-service: add ports and tailscale acl (#870)
* app-service,bfl: app ports acl api

* feat: update settings frontend and settings server

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-01-15 00:18:18 +08:00
dkeven
7ce7f0febe feat: add node to a cluster (#868) 2025-01-14 21:52:28 +08:00
dkeven
0eebaf7ddf feat(installer): add env var to explicitly specify public access (#866) 2025-01-14 21:22:02 +08:00
0x7fffff92
5947cfe42f fix(headscale): use postgres instead of sqlite for headscale rollingupdate (#865)
fix: use postgres instead of sqlite for headscale rollingupdate

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-14 21:21:41 +08:00
berg
e0050837ad wise: fix some bugs and update the version to be consistent with olares 1.11 (#858)
feat: update wise version
2025-01-13 22:22:58 +08:00
aby913
61eeb2094f fix(installer): windows user home path (#862) 2025-01-13 22:08:00 +08:00
dkeven
f9546d61ac fix(installer): fix multiple network-related bugs (#859) 2025-01-13 19:47:36 +08:00
dkeven
b044d6ece1 feat(installer): check systemd-resolved and config resolv.conf (#856) 2025-01-10 22:08:49 +08:00
hysyeah
ec416d0206 app-service: delete cache dir when cancel installation;set nvshare env (#855) 2025-01-10 21:18:51 +08:00
dkeven
1c114a4d80 feat(installer): check the validity of resolv.conf before installation (#851) 2025-01-10 16:12:38 +08:00
berg
fddd30916f market, bfl, app-service: added dependency checking mechanism and fixed some bugs (#849)
* feat: added dependency checking for the application and fixed some bugs

* app-service: add mandatory dep check; dequeue when app is initialized

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-01-09 23:52:49 +08:00
dkeven
5c8af06143 feat(installer): support enabling GPU on Debian & Ubuntu24 (#846) 2025-01-09 23:48:35 +08:00
dkeven
f8885ea3db fix(installer): run cuda lib script for WSL, disable uninstall cmd for WSL (#844) 2025-01-08 19:43:50 +08:00
eball
0cdcfcfb7f auth: redirect to login portal following the request of local domain (#841)
fix: redirect to login portal following the request of local domain
2025-01-08 14:45:45 +08:00
dkeven
ae78500731 fix(installer): use a global supported cuda version list (#842) 2025-01-08 14:44:00 +08:00
huaiyuan
71c24d7592 feat(Files&Vault&Wise&Files server): update LarePass new version to v1.3.14 (#836)
* feat: files server send message to frontend with nats when directory changed

* feat: update vault nats

* fix: files-frontend to vault

* feat: files frontend updates data when the socket message is sent, and add FilesDialog component

* Update files_deploy.yaml

* fix: vault server yaml

* fix: middleware operator nats mr list

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: qq815776412 <815776412@qq.com>
Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: liuyu <>
Co-authored-by: hys <hysyeah@gmail.com>
2025-01-08 14:42:01 +08:00
dkeven
c53444b7c7 fix(installer): unify cuda support check in different tasks (#840) 2025-01-08 11:27:05 +08:00
dkeven
cd8498f3a6 fix(installer): multiple GPU-related bugs (#833) 2025-01-07 22:17:18 +08:00
hysyeah
a0e3cd7d8f image-service: fix remove custom mirror connection check;only proxy docker.io (#834) 2025-01-07 22:05:07 +08:00
aby913
a89ad94cfa fix(installer): check if PowerShell is running as an administrator (#832)
no message
2025-01-07 20:38:28 +08:00
dkeven
b20031bd17 fix(installer): invalid gpu node label value, run task without runner (#831) 2025-01-07 15:07:46 +08:00
dkeven
2c91b10136 fix(installer): properly check cuda driver & gpu plugin (#830) 2025-01-07 12:11:00 +08:00
dkeven
96a7579322 feat(installer): add gpu commands (#826)
* feat: add node selector

* feat(installer): install gpu driver & plugin by default

* fix: label bug

* fix: update installer

---------

Co-authored-by: liuyu <>
2025-01-06 23:06:11 +08:00
simon
aae7a4c21d wise: fix nginx configuration and database migration bugs (#827)
knowledge
2025-01-06 21:26:06 +08:00
aby913
2f76f98b69 fix(installer): install olares-cli.exe to the Windows global path (#823)
fix(installer): install olares-cli.exe to the Windows application directory for global access to olares-cli.exe
2025-01-06 20:13:40 +08:00
yyh
13128d2a16 fix(controlhub&dashboard): fix dashboard analytics multiple entrances and controlhub ui (#825)
fix: fix dashboard analytics multiple entrances and controlhub ui
2025-01-06 19:07:56 +08:00
simon
f9a281e789 knowledge and download: add filter and fix download bugs (#822)
knowledge v0.1.59
2025-01-04 19:53:53 +08:00
berg
78fda8a830 wise: updates upload and download functionality (#821)
feat: wise updates upload and download functionality
2025-01-04 02:26:27 +08:00
hysyeah
f7a254b82f app-service: fix api apps missing initializing state (#820) 2025-01-04 02:26:04 +08:00
wiy
cefcdd2690 revert(files-frontend): move files-frontend back to files_fe_deploy (#819)
* feat: move files-frontend to system-frontend

* feat: set files-service to files1-service

* fix: files service and secret

* fix: update files-service to files-fe-service

* fix: files-fe-frontend build error

* fix: use tab error

* fix: files.conf error

* fix: files.conf server error

* revert: files_frontend and system-frontend

---------

Co-authored-by: liuyu <>
2025-01-04 02:25:41 +08:00
hysyeah
ad08b09463 app-service: add tailscale acls support for OlaresManifest.yaml (#817) 2025-01-02 23:46:33 +08:00
aby913
b00c93b85c feat(installer): add firewall settings for Windows (#816) 2025-01-02 23:45:40 +08:00
0x7fffff92
08cafd2fb5 feat(headscale): move acl.json to configmap (#815)
* feat: add acl to allow ssh for tailscale

* feat: acl using configmap

* chore: using RollingUpdate for headscale

* chore: add default acl.json configmap

---------

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-02 23:45:02 +08:00
wiy
703065750d feat(system-frontend): move files-frontend to system-frontend (#814)
* feat: move files-frontend to system-frontend

* feat: set files-service to files1-service

* fix: files service and secret

* fix: update files-service to files-fe-service

* fix: files-fe-frontend build error

* fix: use tab error

* fix: files.conf error

* fix: files.conf server error

---------

Co-authored-by: liuyu <>
2025-01-02 23:44:11 +08:00
salt
e71ec8d570 feat: recommend optimization (#813)
* feat: recommend optimization

* feat: recommend optimization, frontend part show debug info

---------

Co-authored-by: Ubuntu <ubuntu@ip-172-31-39-127.cluster.local>
2024-12-31 21:13:39 +08:00
fnalways
6932ab655a docs: update wording to clear confusion (#809) 2024-12-27 18:17:19 +08:00
Calvin W
351b0ee938 docs: update wording to clear confusion 2024-12-27 17:50:55 +08:00
hysyeah
f047051140 app-service: fix app suspend in os-system;image download bug (#807) 2024-12-27 15:43:50 +08:00
Ikko Eltociear Ashimine
d9b7b7549c docs: add Japanese README (#806)
I created a Japanese translation of the README.
2024-12-27 14:43:18 +08:00
dkeven
3afd510477 feat(installer): add a separate command for all prechecks (#802)
feat: add a separate command for all prechecks
2024-12-26 20:20:45 +08:00
eball
721b3dad44 olaresd: ignore unknown graphics card (#801) 2024-12-26 20:13:20 +08:00
yyh
6b8a26231a fix(system-frontend): fix app bugs and update some ui (#798) 2024-12-26 11:45:32 +08:00
198 changed files with 25198 additions and 5087 deletions

.github/workflows/build-wsl2326.yaml

@@ -0,0 +1,20 @@
name: Build and Upload WSL MSI
on:
workflow_dispatch:
jobs:
push:
runs-on: ubuntu-latest
steps:
- name: "Checkout source code"
uses: actions/checkout@v3
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
run: |
bash scripts/build-wsl-install-msi.sh


@@ -37,17 +37,8 @@ jobs:
run: |
bash scripts/package.sh
- name: Run chart-testing (list-changed)
id: list-changed
run: |
changed=$(ct list-changed --chart-dirs build/installer/wizard/config --target-branch ${{ github.event.repository.default_branch }})
if [[ -n "$changed" ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Run chart-testing (lint)
if: steps.list-changed.outputs.changed == 'true'
run: ct lint --chart-dirs build/installer/wizard/config --check-version-increment=false --target-branch ${{ github.event.repository.default_branch }}
run: ct lint --chart-dirs build/installer/wizard/config,build/installer/wizard/config/apps,build/installer/wizard/config/gpu --check-version-increment=false --all
# - name: Create kind cluster
# if: steps.list-changed.outputs.changed == 'true'

.github/workflows/daily-lint-check.yaml

@@ -0,0 +1,37 @@
name: Lint Check Charts
on:
schedule:
# This is a UTC time
- cron: "30 1 * * *"
workflow_dispatch:
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up Helm
uses: azure/setup-helm@v3
with:
version: v3.12.1
- uses: actions/setup-python@v4
with:
python-version: '3.9'
check-latest: true
- name: Set up chart-testing
uses: helm/chart-testing-action@v2.6.0
- name: Pre package
run: |
bash scripts/package.sh
- name: Run chart-testing (lint)
run: |
ct lint --chart-dirs build/installer/wizard/config,build/installer/wizard/config/apps,build/installer/wizard/config/gpu --check-version-increment=false --all


@@ -141,6 +141,7 @@ jobs:
build/installer/publicInstaller.sh
build/installer/install.sh
build/installer/install.ps1
build/installer/joincluster.sh
build/installer/publicAddnode.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh


@@ -118,6 +118,7 @@ jobs:
build/installer/publicInstaller.latest.ps1
build/installer/install.ps1
build/installer/publicAddnode.sh
build/installer/joincluster.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh
prerelease: true


@@ -13,6 +13,7 @@
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
@@ -64,28 +65,27 @@ Here is why and where you can count on Olares for private, powerful, and secure
## Getting started
### System compatibility
Olares is available for Linux, Raspberry Pi, Mac, and Windows. It has been tested and verified on the following systems:
| Platform | Operating system | Notes |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 20.04 LTS or later <br/> Debian 11 or later | |
| Raspberry Pi | RaspbianOS | Verified on Raspberry Pi 4 Model B and Raspberry Pi 5 |
| Windows | Windows 11 23H2 or later <br/>Windows 10 22H2 or later<br/> WSL2 | |
| Mac | Monterey (12) or later | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
Olares has been tested and verified on the following Linux platforms:
> **Note**
>
> If you successfully install Olares on an operating system that is not listed in the compatibility table, please let us know! You can [open an issue](https://github.com/beclab/Olares/issues/new) or submit a pull request on our GitHub repository.
- Ubuntu 20.04 LTS or later
- Debian 11 or later
> **Other installation options**
> Olares can also be installed on other platforms like macOS, Windows, PVE, and Raspberry Pi, or installed via docker compose on Linux. However, these are only for **testing and development purposes**. For detailed instructions, visit [Additional installation options](https://docs.olares.xyz/developer/install/additional-installations.html).
### Set up Olares
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.xyz/manual/get-started/) for step-by-step instructions.
## Tech stacks
## Architecture
Public clouds have IaaS, PaaS, and SaaS layers. Olares provides open-source alternatives to these layers.
Olares' architecture is based on two core principles:
- Adopts an Android-like approach to control software permissions and interactivity, ensuring smooth and secure system operations.
- Leverages cloud-native technologies to manage hardware and middleware services efficiently.
![Tech Stacks](https://file.bttcdn.com/github/terminus/v2/tech-stack-olares.jpeg)
![Olares Architecture](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
For detailed description of each component, refer to [Olares architecture](https://docs.olares.xyz/manual/system-architecture.html).
## Features
@@ -100,42 +100,6 @@ Olares offers a wide array of features designed to enhance security, ease of use
- **Seamless anywhere access**: Access your devices from anywhere using dedicated clients for mobile, desktop, and browsers.
- **Development tools**: Comprehensive development tools for effortless application development and porting.
## Comparison with other self-hosting solutions
As an open-source sovereign cloud OS for local AI, Olares reimagines what's possible in self-hosting. To help you understand how Olares stands out in the landscape, we've created a comparison table that highlights its features alongside those of other self-hosting solutions in the market.
**Legend:**
- 🚀: **Auto**, indicates that the system completes the task automatically.
- ✅: **Yes**, indicates that users without a developer background can complete the setup through the product's UI prompts.
- 🛠️: **Manual Configuration**, indicates that even users with an engineering background need to refer to tutorials to complete the setup.
- ❌: **No**, indicates that the feature is not supported.
| | Olares | Synology | TrueNAS | CasaOS | Unraid |
| --- | --- | --- | --- | --- | --- |
| Source Code License | Olares License | Closed | GPL 3.0 | Apache 2.0 | Closed |
| Built On | Kubernetes | Linux | Kubernetes | Docker | Docker |
| Local LLM Hosting | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Local LLM app development | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Multi-Node | ✅ | ❌ | ✅ | ❌ | ❌ |
| Built-in Apps | ✅ (Rich desktop apps) | ✅ (Rich desktop apps) | ❌ (CLI) | ✅ (Simple desktop apps) | ✅ (Dashboard) |
| Free Domain Name | ✅ | ✅ | ❌ | ❌ | ❌ |
| Auto SSL Certificate | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| Reverse Proxy | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| VPN Management | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Graded App Entrance | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Multi-User Management | ✅ User management <br>🚀 Resource isolation | ✅ User management<br>🛠️ Resource isolation | ✅ User management<br>🛠️ Resource isolation | ❌ | ✅ User management <br>🛠️ Resource isolation |
| Single Login for All Apps | 🚀 | ❌ | ❌ | ❌ | ❌ |
| Cross-Node Storage | 🚀 (Juicefs+<br>MinIO) | ❌ | ❌ | ❌ | ❌ |
| Database Solution | 🚀 (Built-in cloud-native solution) | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Disaster Recovery | 🚀 (MinIO's [**Erasure Coding**](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)**)** | ✅ RAID | ✅ RAID | ✅ RAID | ✅ Unraid Storage |
| Backup | ✅ App Data <br>✅ User Data | ✅ User Data | ✅ User Data | ✅ User Data | ✅ User Data |
| App Sandboxing | ✅ | ❌ | ❌ (K8S's namespace) | ❌ | ❌ |
| App Ecosystem | ✅ (Official + third-party) | ✅ (Majorly official apps) | ✅ (Official + third-party submissions) | ✅ Majorly official apps | ✅ (Community app market) |
| Developer Friendly | ✅ IDE <br>✅ CLI <br>✅ SDK <br>✅ Doc | ✅ CLI <br>✅ SDK <br>✅ Doc | ✅ CLI <br>✅ Doc | ✅ CLI <br>✅ Doc | ✅ Doc |
| Client Platforms | ✅ Android <br>✅ iOS <br>✅ Windows <br>✅ Mac <br>✅ Chrome Plugin | ✅ Android <br>✅ iOS | ❌ | ❌ | ❌ |
| Client Functionality | ✅ (All-in-one client app) | ✅ (14 separate client apps) | ❌ | ❌ | ❌ |
## Project navigation
Olares consists of numerous code repositories publicly available on GitHub. The current repository is responsible for the final compilation, packaging, installation, and upgrade of the operating system, while specific changes mostly take place in their corresponding repositories.


@@ -13,6 +13,7 @@
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
@@ -61,31 +62,27 @@ Olares 是为本地端侧 AI 打造的开源私有云操作系统,可轻松将
## 快速开始
### 系统兼容性
你可以在 Linux、Raspberry Pi、Mac 和 Windows 上安装 Olares。目前已验证支持的系统环境如下:
| 平台 | 操作系统 | 备注 |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 20.04 LTS 及以上 <br/> Debian 11 及以上 | |
| Raspberry Pi | RaspbianOS | 已在 Raspberry Pi 4 Model B 和 Raspberry Pi 5 上验证 |
| Windows | Windows 11 23H2 及以上 <br/>Windows 10 22H2 及以上 <br/>WSL2 | |
| Mac | macOS Monterey (12) 及以上 | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
Olares 已在以下 Linux 平台完成测试与验证:
> **注意**
>
> 如果你在未列出的系统版本上成功安装了 Olares,请告诉我们!你可以在 GitHub 仓库中[提交 Issue](https://github.com/beclab/Olares/issues/new) 或发起 Pull Request。
- Ubuntu 20.04 LTS 及以上版本
- Debian 11 及以上版本
> **其他安装方式**
> Olares 也支持在 macOS、Windows、PVE、树莓派等平台上运行或通过 Docker Compose 在 Linux 上部署。但请注意,这些方式**仅适用于开发和测试环境**。详细安装指南请参阅[其他安装方式](https://docs.joinolares.cn/zh/developer/install/additional-installations.html)。
### 安装 Olares
> 当前文档仅有英文版本。
参考[快速上手指南](https://docs.olares.xyz/manual/get-started/)安装并激活 Olares。
参考[快速上手指南](https://docs.joinolares.cn/zh/manual/get-started/)安装并激活 Olares。
## 系统架构
Olares 的架构设计遵循两个核心原则:
- 参考 Android 模式,控制软件权限和交互性,确保系统的流畅性和安全性。
- 借鉴云原生技术,高效管理硬件和中间件服务。
## 技术栈
公有云具有基础设施即服务(IaaS)、平台即服务(PaaS)和软件即服务(SaaS)等层级。Olares 为这些层级提供了开源替代方案。
![架构](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
![技术栈](https://file.bttcdn.com/github/terminus/v2/tech-stack-olares.jpeg)
详细描述请参考 [Olares 架构](https://docs.joinolares.cn/zh/manual/system-architecture.html)文档。
## 功能特性
@@ -100,42 +97,6 @@ Olares 提供了一系列功能,旨在提升安全性、使用便捷性以及
- **无缝访问**:通过移动端、桌面端和网页浏览器客户端,从全球任何地方访问设备。
- **开发工具**:提供全面的工具支持,便于开发和移植应用,加速开发进程。
## 项目对比
Olares 作为一款面向本地 AI 的开源私有云操作系统,为自托管解决方案提供了新的视角。为了帮您快速了解 Olares 在市场中的独特优势,我们制作了一张功能比较表,详细展示了 Olares 的功能以及与市场上其他主流解决方案的对比。
**图例:**
- 🚀: **自动** - 表示系统自动完成任务。
- ✅: **支持** - 表示无开发背景的用户可以通过产品的 UI 提示完成设置。
- 🛠️: **手动配置** - 表示即使是有工程背景的用户也需要参考教程来完成设置。
- ❌: **不支持** - 表示不支持该功能。
| | Olares | 群晖 | TrueNAS | CasaOS | Unraid |
| --- | --- | --- | --- | --- | --- |
| 源代码许可证 | Olares 许可证 | 闭源 | GPL 3.0 | Apache 2.0 | 闭源 |
| 开发 | Kubernetes | Linux | Kubernetes | Docker | Docker |
| 本地 LLM 部署 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 本地 LLM 应用开发 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 多节点支持 | ✅ | ❌ | ✅ | ❌ | ❌ |
| 内置应用 | ✅(桌面应用丰富)| ✅(桌面应用丰富) | ❌ (CLI) | ✅ (桌面应用较少) | ✅(面板) |
| 免费域名 | ✅ | ✅ | ❌ | ❌ | ❌ |
| 自动 SSL 证书 | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| 反向代理 | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| VPN 管理 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 分级应用入口 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 多用户管理 | ✅ 用户管理 <br>🚀 资源隔离 | ✅ 用户管理 <br>🛠️ 资源隔离 | ✅ 用户管理<br>🛠️ 资源隔离 | ❌ | ✅ 用户管理 <br>🛠️ 资源隔离 |
| 单一登录 | 🚀 | ❌ | ❌ | ❌ | ❌ |
| 跨节点存储 | 🚀 (Juicefs+<br>MinIO) | ❌ | ❌ | ❌ | ❌ |
| 数据库解决方案 | 🚀 (内置云原生解决方案) | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 灾难恢复 | 🚀 (MinIO的[**纠错码**](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)**)** | ✅ RAID | ✅ RAID | ✅ RAID | ✅ Unraid Storage |
| 备份 | ✅ 应用数据 <br>✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 |
| 应用沙盒 | ✅ | ❌ | ❌ K8S的命名空间 | ❌ | ❌ |
| 应用生态系统 | ✅ (官方 + 第三方应用) | ✅ (官方应用为主) | ✅ (官方应用 + 第三方提交)| ✅ (官方应用为主) | ✅ (社区应用市场) |
| 开发者友好 | ✅ IDE <br>✅ CLI <br>✅ SDK <br>✅ 文档| ✅ CLI <br>✅ SDK <br>✅ 文档 | ✅ CLI <br>✅ 文档 | ✅ CLI <br>✅ 文档 | ✅ 文档 |
| 客户端 | ✅ Android <br>✅ iOS <br>✅ Windows <br>✅ Mac <br>✅ Chrome 插件 | ✅ Android <br>✅ iOS | ❌ | ❌ | ❌ |
| 客户端功能 | ✅ (一体化客户端应用) | ✅ 14个分散的客户端应用| ❌ | ❌ | ❌ |
## 项目目录
Olares 包含多个在 GitHub 上公开可用的代码仓库。当前仓库负责操作系统的最终编译、打包、安装和升级,而特定的更改主要在各自对应的仓库中进行。

README_JP.md

@@ -0,0 +1,193 @@
<div align="center">
# Olares: An Open-Source Sovereign Cloud OS for Local AI<!-- omit in toc -->
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/olares)](https://github.com/beclab/olares/commits/main)
![Build Status](https://github.com/beclab/olares/actions/workflows/release-daily.yaml/badge.svg)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/beclab/olares)](https://github.com/beclab/olares/releases)
[![GitHub Repo stars](https://img.shields.io/github/stars/beclab/olares?style=social)](https://github.com/beclab/olares/stargazers)
[![Discord](https://img.shields.io/badge/Discord-7289DA?logo=discord&logoColor=white)](https://discord.com/invite/BzfqrgQPDK)
[![License](https://img.shields.io/badge/License-Olares-darkblue)](https://github.com/beclab/olares/blob/main/LICENSE.md)
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*With Olares, you can build a local AI assistant, sync your data anywhere, self-host your workspace, stream your own media, and much more.*
<p align="center">
<a href="https://olares.xyz">Website</a> ·
<a href="https://docs.olares.xyz">Documentation</a> ·
<a href="https://olares.xyz/larepass">Download LarePass</a> ·
<a href="https://github.com/beclab/apps">Olares Apps</a> ·
<a href="https://space.olares.xyz">Olares Space</a>
</p>
> [!IMPORTANT]
> We recently completed the rebranding from Terminus to Olares. For details, see the [rebranding blog](https://blog.olares.xyz/terminus-is-now-olares/).
Transform your hardware into an AI home server with Olares, an open-source sovereign cloud OS for local AI.
- **Run cutting-edge AI models on your own terms**: Easily host powerful open AI models such as LLaMA, Stable Diffusion, Whisper, and Flux.1 on your hardware, keeping full control of your AI environment.
- **Deploy with ease**: Discover and install a wide range of open-source AI apps from the Olares Market in just a few clicks. No complicated configuration or setup required.
- **Access anytime, anywhere**: Reach your AI apps and models through a browser whenever you need them.
- **A smarter AI experience through integration**: Using a mechanism similar to the [Model Context Protocol](https://spec.modelcontextprotocol.io/specification/) (MCP), Olares seamlessly connects AI models with AI apps and your private data sets, enabling highly personalized, context-aware AI interactions that adapt to your needs.
> 🌟 *Star us to get notified about new releases and updates.*
## Why Olares
Olares delivers a private, powerful, and secure sovereign cloud experience for the following scenarios:
🤖 **Edge AI**: Run cutting-edge open AI models locally, including large language models, computer vision, and speech recognition. Create private AI services tailored to your data for enhanced functionality and privacy.<br>
📊 **Personal data repository**: Securely store, sync, and manage your important files, photos, and documents across devices and locations.<br>
🚀 **Self-hosted workspace**: Build a free collaborative workspace for your team using secure, open-source SaaS alternatives.<br>
🎥 **Private media server**: Host your personal media collection and run your own streaming service.<br>
🏡 **Smart home hub**: Create a central control point for your IoT devices and home automation.<br>
🤝 **User-owned decentralized social media**: Easily install decentralized social media apps such as Mastodon, Ghost, and WordPress on Olares, and build your personal brand without the risk of platform fees or account suspension.<br>
📚 **Learning platform**: Get hands-on experience with self-hosting, container orchestration, and cloud technologies.
## Getting started
### System compatibility
Olares has been tested and verified on the following Linux platforms:
- Ubuntu 20.04 LTS or later
- Debian 11 or later
> **Additional installation options**
> Olares also supports installation on platforms such as macOS, Windows, PVE, and Raspberry Pi, as well as installation via Docker Compose on Linux. However, these methods are intended for development and testing only. For details, see the [additional installation options](https://docs.olares.xyz/developer/install/additional-installations.html).
### Setting up Olares
To get started with Olares on your own device, follow the [getting started guide](https://docs.olares.xyz/manual/get-started/) for step-by-step instructions.
## Architecture
The architecture of Olares is built on two basic principles:
- Drawing on Android's design philosophy, it controls software permissions and interactions to keep the system secure and running smoothly.
- It leverages cloud-native technologies to manage hardware and middleware services efficiently.
![Olares architecture](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
For details on each component, see [Olares architecture](https://docs.olares.xyz/manual/system-architecture.html).
## Features
Olares offers a wide range of features designed to enhance security, ease of use, and development flexibility:
- **Enterprise-grade security**: Simplify network configuration using Tailscale, Headscale, Cloudflare Tunnel, and FRP.
- **Secure, permissionless application ecosystem**: Sandboxing ensures application isolation and security.
- **Unified file system and database**: Provides automatic scaling, backup, and high availability.
- **Single sign-on**: Log in once and access all applications within Olares through a shared authentication service.
- **AI capabilities**: A comprehensive solution for GPU management, local AI model hosting, and private knowledge bases, all while preserving data privacy.
- **Built-in applications**: Includes a file manager, sync drive, vault, reader, app market, settings, and dashboard.
- **Seamless access from anywhere**: Reach your devices from anywhere using dedicated clients for mobile, desktop, and browser.
- **Development tools**: Comprehensive tooling that makes it easy to develop and port applications.
## Project navigation
Olares consists of numerous code repositories publicly available on GitHub. The current repository is responsible for the final compilation, packaging, installation, and upgrade of the operating system, while specific changes are mostly made in their corresponding repositories.
The table below lists the project directories of Olares and their corresponding repositories. Find the ones that interest you:
<details>
<summary><b>Framework components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [frameworks/app-service](https://github.com/beclab/olares/tree/main/frameworks/app-service) | <https://github.com/beclab/app-service> | A system framework component that provides lifecycle management and various security controls for all apps in the system. |
| [frameworks/backup-server](https://github.com/beclab/olares/tree/main/frameworks/backup-server) | <https://github.com/beclab/backup-server> | A system framework component that provides scheduled full or incremental cluster backup services. |
| [frameworks/bfl](https://github.com/beclab/olares/tree/main/frameworks/bfl) | <https://github.com/beclab/bfl> | Backend For Launcher (BFL), which serves as the user access point and aggregates and proxies the interfaces of various backend services. |
| [frameworks/GPU](https://github.com/beclab/olares/tree/main/frameworks/GPU) | <https://github.com/grgalex/nvshare> | A GPU sharing mechanism that allows multiple processes, or containers running on Kubernetes, to run safely and concurrently on the same physical GPU, with each process able to use the full GPU memory. |
| [frameworks/l4-bfl-proxy](https://github.com/beclab/olares/tree/main/frameworks/l4-bfl-proxy) | <https://github.com/beclab/l4-bfl-proxy> | A layer-4 network proxy for BFL. By reading the SNI in advance, it provides a dynamic route to pass through to the user's Ingress. |
| [frameworks/osnode-init](https://github.com/beclab/olares/tree/main/frameworks/osnode-init) | <https://github.com/beclab/osnode-init> | A system framework component that initializes node data when a new node joins the cluster. |
| [frameworks/system-server](https://github.com/beclab/olares/tree/main/frameworks/system-server) | <https://github.com/beclab/system-server> | Part of the system runtime framework, providing a mechanism for secure calls between apps. |
| [frameworks/tapr](https://github.com/beclab/olares/tree/main/frameworks/tapr) | <https://github.com/beclab/tapr> | Olares application runtime components. |
</details>
<details>
<summary><b>System-level applications and services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [apps/analytic](https://github.com/beclab/olares/tree/main/apps/analytic) | <https://github.com/beclab/analytic> | Developed on [Umami](https://github.com/umami-software/umami), Analytic is a simple, fast, privacy-focused alternative to Google Analytics. |
| [apps/market](https://github.com/beclab/olares/tree/main/apps/market) | <https://github.com/beclab/market> | This repository deploys the frontend of the Olares application market. |
| [apps/market-server](https://github.com/beclab/olares/tree/main/apps/market-server) | <https://github.com/beclab/market> | This repository deploys the backend of the Olares application market. |
| [apps/argo](https://github.com/beclab/olares/tree/main/apps/argo) | <https://github.com/argoproj/argo-workflows> | A workflow engine that orchestrates container execution for local recommendation algorithms. |
| [apps/desktop](https://github.com/beclab/olares/tree/main/apps/desktop) | <https://github.com/beclab/desktop> | The system's built-in desktop application. |
| [apps/devbox](https://github.com/beclab/olares/tree/main/apps/devbox) | <https://github.com/beclab/devbox> | A developer IDE for porting and developing Olares applications. |
| [apps/vault](https://github.com/beclab/olares/tree/main/apps/vault) | <https://github.com/beclab/termipass> | Developed on [Padloc](https://github.com/padloc/padloc), a free alternative to 1Password and Bitwarden for teams and businesses of any size. It also serves as a client that helps manage DIDs, Olares IDs, and Olares devices. |
| [apps/files](https://github.com/beclab/olares/tree/main/apps/files) | <https://github.com/beclab/files> | A built-in file manager, modified from [Filebrowser](https://github.com/filebrowser/filebrowser), that provides management of files in Drive, Sync, and on the various physical nodes of Olares. |
| [apps/notifications](https://github.com/beclab/olares/tree/main/apps/notifications) | <https://github.com/beclab/notifications> | The notification system of Olares. |
| [apps/profile](https://github.com/beclab/olares/tree/main/apps/profile) | <https://github.com/beclab/profile> | A Linktree alternative for Olares. |
| [apps/rsshub](https://github.com/beclab/olares/tree/main/apps/rsshub) | <https://github.com/beclab/rsshub> | An RSS subscription manager based on [RssHub](https://github.com/DIYgod/RSSHub). |
| [apps/settings](https://github.com/beclab/olares/tree/main/apps/settings) | <https://github.com/beclab/settings> | Built-in system settings. |
| [apps/system-apps](https://github.com/beclab/olares/tree/main/apps/system-apps) | <https://github.com/beclab/system-apps> | Built on the _kubesphere/console_ project, system-service provides a self-hosted cloud platform that helps you understand and control the system's running state and resource usage through a visual dashboard and the feature-rich ControlHub. |
| [apps/wizard](https://github.com/beclab/olares/tree/main/apps/wizard) | <https://github.com/beclab/wizard> | A wizard application that guides users through the system activation process. |
</details>
<details>
<summary><b>Third-party components and services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [third-party/authelia](https://github.com/beclab/olares/tree/main/third-party/authelia) | <https://github.com/beclab/authelia> | An open-source authentication and authorization server that provides two-factor authentication and single sign-on (SSO) for applications via a web portal. |
| [third-party/headscale](https://github.com/beclab/olares/tree/main/third-party/headscale) | <https://github.com/beclab/headscale> | An open-source, self-hosted implementation of the Tailscale control server in Olares, used to manage Tailscale across different devices in LarePass. |
| [third-party/infisical](https://github.com/beclab/olares/tree/main/third-party/infisical) | <https://github.com/beclab/infisical> | An open-source secret management platform that syncs secrets across your team and infrastructure and prevents secret leaks. |
| [third-party/juicefs](https://github.com/beclab/olares/tree/main/third-party/juicefs) | <https://github.com/beclab/juicefs-ext> | A distributed POSIX file system built on top of Redis and S3, allowing apps on different nodes to access the same data via POSIX interfaces. |
| [third-party/ks-console](https://github.com/beclab/olares/tree/main/third-party/ks-console) | <https://github.com/kubesphere/console> | The Kubesphere console, which enables cluster management via a web GUI. |
| [third-party/ks-installer](https://github.com/beclab/olares/tree/main/third-party/ks-installer) | <https://github.com/beclab/ks-installer-ext> | A Kubesphere installer component that automatically creates Kubesphere clusters based on cluster resource definitions. |
| [third-party/kube-state-metrics](https://github.com/beclab/olares/tree/main/third-party/kube-state-metrics) | <https://github.com/beclab/kube-state-metrics> | kube-state-metrics (KSM) is a simple service that listens to the Kubernetes API server and generates metrics about the state of objects. |
| [third-party/notification-manager](https://github.com/beclab/olares/tree/main/third-party/notification-manager) | <https://github.com/beclab/notification-manager-ext> | Kubesphere's notification management component, providing unified management of multiple notification channels and custom aggregation of notification content. |
| [third-party/predixy](https://github.com/beclab/olares/tree/main/third-party/predixy) | <https://github.com/beclab/predixy> | A proxy service for Redis clusters that automatically identifies available nodes and adds namespace isolation. |
| [third-party/redis-cluster-operator](https://github.com/beclab/olares/tree/main/third-party/redis-cluster-operator) | <https://github.com/beclab/redis-cluster-operator> | A cloud-native tool for creating and managing Redis clusters on Kubernetes. |
| [third-party/seafile-server](https://github.com/beclab/olares/tree/main/third-party/seafile-server) | <https://github.com/beclab/seafile-server> | The backend service of the Seafile sync drive, handling data storage. |
| [third-party/seahub](https://github.com/beclab/olares/tree/main/third-party/seahub) | <https://github.com/beclab/seahub> | The frontend and middleware service of the Seafile sync drive, handling file sharing, data syncing, and more. |
| [third-party/tailscale](https://github.com/beclab/olares/tree/main/third-party/tailscale) | <https://github.com/tailscale/tailscale> | Tailscale is integrated into LarePass on all platforms. |
</details>
<details>
<summary><b>Additional libraries and components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [build/installer](https://github.com/beclab/olares/tree/main/build/installer) | | The template for generating the installer build. |
| [build/manifest](https://github.com/beclab/olares/tree/main/build/manifest) | | The manifest template for the installation build image list. |
| [libs/fs-lib](https://github.com/beclab/olares/tree/main/libs) | <https://github.com/beclab/fs-lib> | The SDK library for the iNotify-compatible interface, implemented on JuiceFS. |
| [scripts](https://github.com/beclab/olares/tree/main/scripts) | | Helper scripts for generating the installer build. |
</details>
## Contributing to Olares
We welcome contributions in any form:
- If you want to develop your own applications on Olares, refer to:<br>
https://docs.olares.xyz/developer/develop/
- If you want to help improve Olares, refer to:<br>
https://docs.olares.xyz/developer/contribute/olares.html
## Community & contact
* [**GitHub Discussion**](https://github.com/beclab/olares/discussions). Best for sharing feedback and asking questions.
* [**GitHub Issues**](https://github.com/beclab/olares/issues). Best for reporting bugs you encounter while using Olares and submitting feature proposals.
* [**Discord**](https://discord.com/invite/BzfqrgQPDK). Best for sharing anything Olares.
## Special thanks
The Olares project incorporates numerous third-party open-source projects, including: [Kubernetes](https://kubernetes.io/), [Kubesphere](https://github.com/kubesphere/kubesphere), [Padloc](https://padloc.app/), [K3S](https://k3s.io/), [JuiceFS](https://github.com/juicedata/juicefs), [MinIO](https://github.com/minio/minio), [Envoy](https://github.com/envoyproxy/envoy), [Authelia](https://github.com/authelia/authelia), [Infisical](https://github.com/Infisical/infisical), [Dify](https://github.com/langgenius/dify), [Seafile](https://github.com/haiwen/seafile), [HeadScale](https://headscale.net/), [tailscale](https://tailscale.com/), [Redis Operator](https://github.com/spotahome/redis-operator), [Nitro](https://nitro.jan.ai/), [RssHub](http://rsshub.app/), [predixy](https://github.com/joyieldInc/predixy), [nvshare](https://github.com/grgalex/nvshare), [LangChain](https://www.langchain.com/), [Quasar](https://quasar.dev/), [TrustWallet](https://trustwallet.com/), [Restic](https://restic.net/), [ZincSearch](https://zincsearch-docs.zinc.dev/), [filebrowser](https://filebrowser.org/), [lego](https://go-acme.github.io/lego/), [Velero](https://velero.io/), [s3rver](https://github.com/jamhall/s3rver), [Citusdata](https://www.citusdata.com/).


@@ -0,0 +1,67 @@
{{- $namespace := printf "%s" "os-system" -}}
{{- $rss_secret := (lookup "v1" "Secret" $namespace "rss-secrets") -}}
{{- $password := "" -}}
{{ if $rss_secret -}}
{{ $password = (index $rss_secret "data" "pg_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password := "" -}}
{{ if $rss_secret -}}
{{ $redis_password = (index $rss_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password_data := "" -}}
{{ $pg_password_data = $password | b64dec }}
{{- $pg_user := printf "%s" "argo_os_system" -}}
{{- $pg_user = $pg_user | b64enc -}}
---
apiVersion: v1
kind: Secret
metadata:
  name: rss-secrets
  namespace: os-system
type: Opaque
data:
  pg_user: {{ $pg_user }}
  pg_password: {{ $password }}
  redis_password: {{ $redis_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
  name: rss-pg
  namespace: os-system
spec:
  app: rss
  appNamespace: os-system
  middleware: postgres
  postgreSQL:
    user: argo_os_system
    password:
      valueFrom:
        secretKeyRef:
          key: pg_password
          name: rss-secrets
    databases:
    - name: rss
    - name: rss_v1
    - name: argo
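The `lookup`-based logic at the top of this template is a common Helm idiom: reuse the values of an already-existing Secret when the chart is re-rendered, so generated passwords stay stable across `helm upgrade` runs instead of rotating on every deploy. A minimal, self-contained sketch of the same pattern (the secret name `example-secrets` is hypothetical, not from this chart):

```yaml
{{- /* Reuse the password if the Secret already exists; otherwise generate one. */ -}}
{{- $password := randAlphaNum 16 | b64enc -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "example-secrets" -}}
{{- if $existing -}}
{{- $password = index $existing.data "password" -}}
{{- end -}}
apiVersion: v1
kind: Secret
metadata:
  name: example-secrets
type: Opaque
data:
  password: {{ $password }}
```

Note that `lookup` returns an empty result during `helm template` and `--dry-run`, so the generate branch always runs there; the reuse branch only takes effect against a live cluster.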


@@ -0,0 +1,26 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: os-system:argoworkflows
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflows
subjects:
- kind: ServiceAccount
  name: argoworkflows
  namespace: os-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: os-system:argoworkflows-cluster-template
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflows-cluster-template
subjects:
- kind: ServiceAccount
  name: argoworkflows
  namespace: os-system


@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argoworkflows
  namespace: os-system
  labels:
    app: argoworkflows
    app.kubernetes.io/managed-by: Helm
  annotations:
    applications.app.bytetrade.io/icon: https://argoproj.github.io/argo-workflows/assets/logo.png
    applications.app.bytetrade.io/title: argoworkflows
    applications.app.bytetrade.io/version: '0.35.0'
spec:
  selector:
    matchLabels:
      app: argoworkflows
  template:
    metadata:
      labels:
        app: argoworkflows
    spec:
      serviceAccountName: argoworkflows
      containers:
      - name: argo-server
        image: quay.io/argoproj/argocli:v3.5.0
        imagePullPolicy: IfNotPresent
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        args:
        - server
        - --configmap=argoworkflow-workflow-controller-configmap
        - "--auth-mode=server"
        - "--secure=false"
        - "--x-frame-options="
        - "--loglevel"
        - "debug"
        - "--gloglevel"
        - "0"
        - "--log-format"
        - "text"
        ports:
        - name: web
          containerPort: 2746
        readinessProbe:
          httpGet:
            path: /
            port: 2746
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 20
        env:
        - name: IN_CLUSTER
          value: "true"
        - name: ARGO_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: BASE_HREF
          value: /
        volumeMounts:
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300


@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argoworkflows
  namespace: os-system


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: argoworkflows-svc
  namespace: os-system
spec:
  ports:
  - port: 2746
    name: http
    protocol: TCP
    targetPort: 2746
  selector:
    app: argoworkflows
  sessionAffinity: None
  type: ClusterIP


@@ -0,0 +1,40 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: argoworkflow-workflow-controller-configmap
  namespace: os-system
data:
  config: |
    instanceID: os-system
    artifactRepository:
      archiveLogs: true
      s3:
        accessKeySecret:
          key: AWS_ACCESS_KEY_ID
          name: argo-workflow-log-fakes3
        secretKeySecret:
          key: AWS_SECRET_ACCESS_KEY
          name: argo-workflow-log-fakes3
        bucket: mongo-backup
        endpoint: tapr-s3-svc:4568
        insecure: true
    persistence:
      connectionPool:
        maxIdleConns: 5
        maxOpenConns: 0
      archive: true
      archiveTTL: 5d
      postgresql:
        host: citus-headless.os-system
        port: 5432
        database: os_system_argo
        tableName: argo_workflows
        userNameSecret:
          name: rss-secrets
          key: pg_user
        passwordSecret:
          name: rss-secrets
          key: pg_password
    nodeEvents:
      enabled: true
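Because this controller config sets `instanceID: os-system`, the workflow-controller only reconciles workflows that carry the matching instance-ID label and ignores all others. A minimal workflow sketch under that assumption (the workflow itself is hypothetical, not part of this chart):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-
  namespace: os-system
  labels:
    # Must match the controller's instanceID, or this workflow is never picked up.
    workflows.argoproj.io/controller-instanceid: os-system
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: busybox
      command: [echo, "hello"]
```

With `persistence.archive: true`, completed workflows are archived to the `argo_workflows` table in the Postgres database above and expire after the configured `archiveTTL` of five days, while pod logs are archived to the fake-S3 bucket.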


@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: os-system:argoworkflow-workflow-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflow-workflow-controller
subjects:
- kind: ServiceAccount
  name: argoworkflow-workflow-controller
  namespace: os-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: os-system:argoworkflow-workflow-controller-cluster-template
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflow-workflow-controller-cluster-template
subjects:
- kind: ServiceAccount
  name: argoworkflow-workflow-controller
  namespace: os-system


@@ -0,0 +1,89 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argoworkflow-workflow-controller
  namespace: os-system
  labels:
    app.kubernetes.io/component: workflow-controller
    app.kubernetes.io/instance: argo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argoworkflows-workflow-controller
    app.kubernetes.io/part-of: argo-workflows
    app.kubernetes.io/version: v3.5.0
    helm.sh/chart: argoworkflows-0.35.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: argo
      app.kubernetes.io/name: argoworkflows-workflow-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/component: workflow-controller
        app.kubernetes.io/instance: argo
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: argoworkflows-workflow-controller
        app.kubernetes.io/part-of: argo-workflows
        app.kubernetes.io/version: v3.5.0
        helm.sh/chart: argoworkflows-0.35.0
    spec:
      serviceAccountName: argoworkflow-workflow-controller
      serviceAccount: argoworkflow-workflow-controller
      schedulerName: default-scheduler
      containers:
      - name: controller
        image: quay.io/argoproj/workflow-controller:v3.5.0
        imagePullPolicy: IfNotPresent
        command: [ "workflow-controller" ]
        args:
        - "--configmap"
        - "argoworkflow-workflow-controller-configmap"
        - "--executor-image"
        - "quay.io/argoproj/argoexec:v3.5.0"
        - "--loglevel"
        - "debug"
        - "--gloglevel"
        - "0"
        - "--log-format"
        - "text"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
        env:
        - name: ARGO_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: LEADER_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        ports:
        - name: metrics
          containerPort: 9090
          protocol: TCP
        - containerPort: 6060
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 6060
            scheme: HTTP
          initialDelaySeconds: 90
          timeoutSeconds: 30
          periodSeconds: 60
          successThreshold: 1
          failureThreshold: 3
      nodeSelector:
        kubernetes.io/os: linux


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argoworkflow-workflow-controller
  namespace: os-system


@@ -5,7 +5,7 @@ apiVersion: v1
 kind: Secret
 metadata:
   name: argo-workflow-log-fakes3
-  namespace: {{ .Release.Namespace }}
+  namespace: os-system
 type: Opaque
 stringData:
   AWS_ACCESS_KEY_ID: S3RVER
@@ -16,7 +16,7 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: workflow-role
-  namespace: {{ .Release.Namespace }}
+  namespace: os-system
 rules:
 - apiGroups:
   - "*"
@@ -30,10 +30,10 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: workflow-rolebinding
-  namespace: {{ .Release.Namespace }}
+  namespace: os-system
 subjects:
 - kind: ServiceAccount
-  namespace: {{ .Release.Namespace }}
+  namespace: os-system
   name: default
 roleRef:
   kind: Role


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argoworkflow-workflow
  namespace: os-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argoworkflow-workflow
subjects:
- kind: ServiceAccount
  name: argo-workflow
  namespace: os-system


@@ -1,10 +1,8 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
-  name: {{ template "argo-workflows.fullname" $ }}-workflow
-  labels:
-    {{- include "argo-workflows.labels" (dict "context" $ "component" $.Values.controller.name "name" $.Values.controller.name) | nindent 4 }}
-  namespace: {{ $.Release.Namespace}}
+  name: argoworkflow-workflow
+  namespace: os-system
 rules:
 - apiGroups:
   - ""


@@ -1,5 +1,5 @@
 apiVersion: v2
-name: rss
+name: argo
 description: A Helm chart for Kubernetes
 maintainers:
 - name: bytetrade


@@ -1,39 +0,0 @@
apiVersion: v2
name: argoworkflows
description: A Helm chart for Argo Workflows
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.35.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v3.5.0"
icon: https://argoproj.github.io/argo-workflows/assets/logo.png
home: https://github.com/argoproj/argo-helm
sources:
- https://github.com/argoproj/argo-workflows
maintainers:
- name: argoproj
  url: https://argoproj.github.io/
annotations:
  artifacthub.io/signKey: |
    fingerprint: 2B8F22F57260EFA67BE1C5824B11F800CD9D2252
    url: https://argoproj.github.io/argo-helm/pgp_keys.asc
  artifacthub.io/changes: |
    - kind: changed
      description: Upgrade to Argo Workflows v3.4.10


@@ -1,7 +0,0 @@
1. Get Argo Server external IP/domain by running:
   kubectl --namespace {{ .Release.Namespace }} get services -o wide | grep {{ template "argo-workflows.server.fullname" . }}
2. Submit the hello-world workflow by running:
   argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml --watch


@@ -1,189 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Create argo workflows server name and version as used by the chart label.
*/}}
{{- define "argo-workflows.server.fullname-bak" -}}
{{- printf "%s-%s" (include "argo-workflows.fullname" .) .Values.server.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "argo-workflows.server.fullname" -}}
argoworkflows
{{- end -}}
{{/*
Create controller name and version as used by the chart label.
*/}}
{{- define "argo-workflows.controller.fullname" -}}
{{- printf "%s-%s" (include "argo-workflows.fullname" .) .Values.controller.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "argo-workflows.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{/*{{- define "argo-workflows.fullname" -}}*/}}
{{/*{{- if .Values.fullnameOverride -}}*/}}
{{/*{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}*/}}
{{/*{{- else -}}*/}}
{{/*{{- $name := default .Chart.Name .Values.nameOverride -}}*/}}
{{/*{{- if contains $name .Release.Name -}}*/}}
{{/*{{- .Release.Name | trunc 63 | trimSuffix "-" -}}*/}}
{{/*{{- else -}}*/}}
{{/*{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}*/}}
{{/*{{- end -}}*/}}
{{/*{{- end -}}*/}}
{{/*{{- end -}}*/}}
{{- define "argo-workflows.fullname" -}}
argoworkflow
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "argo-workflows.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create kubernetes friendly chart version label for the controller.
Examples:
image.tag = v3.4.4
output = v3.4.4
image.tag = v3.4.4@sha256:d06860f1394a94ac3ff8401126ef32ba28915aa6c3c982c7e607ea0b4dadb696
output = v3.4.4
*/}}
{{- define "argo-workflows.controller_chart_version_label" -}}
{{- regexReplaceAll "[^a-zA-Z0-9-_.]+" (regexReplaceAll "@sha256:[a-f0-9]+" (default (include "argo-workflows.defaultTag" .) .Values.controller.image.tag) "") "" | trunc 63 | quote -}}
{{- end -}}
{{/*
Create kubernetes friendly chart version label for the server.
Examples:
image.tag = v3.4.4
output = v3.4.4
image.tag = v3.4.4@sha256:d06860f1394a94ac3ff8401126ef32ba28915aa6c3c982c7e607ea0b4dadb696
output = v3.4.4
*/}}
{{- define "argo-workflows.server_chart_version_label" -}}
{{- regexReplaceAll "[^a-zA-Z0-9-_.]+" (regexReplaceAll "@sha256:[a-f0-9]+" (default (include "argo-workflows.defaultTag" .) .Values.server.image.tag) "") "" | trunc 63 | quote -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "argo-workflows.labels" -}}
helm.sh/chart: {{ include "argo-workflows.chart" .context }}
{{ include "argo-workflows.selectorLabels" (dict "context" .context "component" .component "name" .name) }}
app.kubernetes.io/managed-by: {{ .context.Release.Service }}
app.kubernetes.io/part-of: argo-workflows
{{- end }}
{{/*
Selector labels
*/}}
{{- define "argo-workflows.selectorLabels" -}}
{{- if .name -}}
app.kubernetes.io/name: {{ include "argo-workflows.name" .context }}-{{ .name }}
{{ end -}}
app.kubernetes.io/instance: {{ .context.Release.Name }}
{{- if .component }}
app.kubernetes.io/component: {{ .component }}
{{- end }}
{{- end }}
{{/*
Create the name of the server service account to use
*/}}
{{- define "argo-workflows.serverServiceAccountName" -}}
{{- if .Values.server.serviceAccount.create -}}
{{ default (include "argo-workflows.server.fullname" .) .Values.server.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.server.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the controller service account to use
*/}}
{{- define "argo-workflows.controllerServiceAccountName" -}}
{{- if .Values.controller.serviceAccount.create -}}
{{ default (include "argo-workflows.controller.fullname" .) .Values.controller.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.controller.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for ingress
*/}}
{{- define "argo-workflows.ingress.apiVersion" -}}
{{- if semverCompare "<1.14-0" (include "argo-workflows.kubeVersion" $) -}}
{{- print "extensions/v1beta1" -}}
{{- else if semverCompare "<1.19-0" (include "argo-workflows.kubeVersion" $) -}}
{{- print "networking.k8s.io/v1beta1" -}}
{{- else -}}
{{- print "networking.k8s.io/v1" -}}
{{- end -}}
{{- end -}}
{{/*
Return the target Kubernetes version
*/}}
{{- define "argo-workflows.kubeVersion" -}}
{{- default .Capabilities.KubeVersion.Version .Values.kubeVersionOverride }}
{{- end -}}
{{/*
Return the default Argo Workflows app version
*/}}
{{- define "argo-workflows.defaultTag" -}}
{{- default .Chart.AppVersion .Values.images.tag }}
{{- end -}}
{{/*
Return full image name including or excluding registry based on existence
*/}}
{{- define "argo-workflows.image" -}}
{{- if and .image.registry .image.repository -}}
{{ .image.registry }}/{{ .image.repository }}
{{- else -}}
{{ .image.repository }}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for autoscaling
*/}}
{{- define "argo-workflows.apiVersion.autoscaling" -}}
{{- if .Values.apiVersionOverrides.autoscaling -}}
{{- print .Values.apiVersionOverrides.autoscaling -}}
{{- else if semverCompare "<1.23-0" (include "argo-workflows.kubeVersion" .) -}}
{{- print "autoscaling/v2beta1" -}}
{{- else -}}
{{- print "autoscaling/v2" -}}
{{- end -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for GKE resources
*/}}
{{- define "argo-workflows.apiVersions.cloudgoogle" -}}
{{- if .Values.apiVersionOverrides.cloudgoogle -}}
{{- print .Values.apiVersionOverrides.cloudgoogle -}}
{{- else if .Capabilities.APIVersions.Has "cloud.google.com/v1" -}}
{{- print "cloud.google.com/v1" -}}
{{- else -}}
{{- print "cloud.google.com/v1beta1" -}}
{{- end -}}
{{- end -}}


@@ -1,208 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "argo-workflows.controller.fullname" . }}-configmap
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" "cm") | nindent 4 }}
data:
  config: |
    {{- if .Values.controller.instanceID.enabled }}
    {{- if .Values.controller.instanceID.useReleaseName }}
    instanceID: {{ .Release.Namespace }}
    {{- else }}
    instanceID: {{ .Values.controller.instanceID.explicitID }}
    {{- end }}
    {{- end }}
    {{- if .Values.controller.parallelism }}
    parallelism: {{ .Values.controller.parallelism }}
    {{- end }}
    {{- if .Values.controller.resourceRateLimit }}
    resourceRateLimit: {{ toYaml .Values.controller.resourceRateLimit | nindent 6 }}
    {{- end }}
    {{- with .Values.controller.namespaceParallelism }}
    namespaceParallelism: {{ . }}
    {{- end }}
    {{- with .Values.controller.initialDelay }}
    initialDelay: {{ . }}
    {{- end }}
    {{- if or .Values.mainContainer.resources .Values.mainContainer.env .Values.mainContainer.envFrom .Values.mainContainer.securityContext}}
    mainContainer:
      imagePullPolicy: {{ default (.Values.images.pullPolicy) .Values.mainContainer.imagePullPolicy }}
      {{- with .Values.mainContainer.resources }}
      resources: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.mainContainer.env }}
      env: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.mainContainer.envFrom }}
      envFrom: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.mainContainer.securityContext }}
      securityContext: {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- end }}
    {{- if or .Values.executor.resources .Values.executor.env .Values.executor.args .Values.executor.securityContext}}
    executor:
      imagePullPolicy: {{ default (.Values.images.pullPolicy) .Values.executor.image.pullPolicy }}
      {{- with .Values.executor.resources }}
      resources: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.executor.args }}
      args: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.executor.env }}
      env: {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.executor.securityContext }}
      securityContext: {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- end }}
    {{- if or .Values.artifactRepository.s3 .Values.artifactRepository.gcs .Values.artifactRepository.azure .Values.customArtifactRepository }}
    artifactRepository:
      {{- if .Values.artifactRepository.archiveLogs }}
      archiveLogs: {{ .Values.artifactRepository.archiveLogs }}
      {{- end }}
      {{- with .Values.artifactRepository.gcs }}
      gcs: {{- tpl (toYaml .) $ | nindent 8 }}
      {{- end }}
      {{- with .Values.artifactRepository.azure }}
      azure: {{- tpl (toYaml .) $ | nindent 8 }}
      {{- end }}
      {{- if .Values.artifactRepository.s3 }}
      s3:
        {{- if .Values.useStaticCredentials }}
        accessKeySecret:
          key: {{ tpl .Values.artifactRepository.s3.accessKeySecret.key . }}
          name: {{ tpl .Values.artifactRepository.s3.accessKeySecret.name . }}
        secretKeySecret:
          key: {{ tpl .Values.artifactRepository.s3.secretKeySecret.key . }}
          name: {{ tpl .Values.artifactRepository.s3.secretKeySecret.name . }}
        {{- end }}
        bucket: {{ tpl (.Values.artifactRepository.s3.bucket | default "") . }}
        endpoint: workflow-archivelog-s3.user-system-{{ .Values.global.bfl.username }}:4568
        insecure: {{ .Values.artifactRepository.s3.insecure }}
        {{- if .Values.artifactRepository.s3.keyFormat }}
        keyFormat: {{ .Values.artifactRepository.s3.keyFormat | quote }}
        {{- end }}
        {{- if .Values.artifactRepository.s3.region }}
        region: {{ tpl .Values.artifactRepository.s3.region $ }}
        {{- end }}
        {{- if .Values.artifactRepository.s3.roleARN }}
        roleARN: {{ .Values.artifactRepository.s3.roleARN }}
        {{- end }}
        {{- if .Values.artifactRepository.s3.useSDKCreds }}
        useSDKCreds: {{ .Values.artifactRepository.s3.useSDKCreds }}
        {{- end }}
        {{- with .Values.artifactRepository.s3.encryptionOptions }}
        encryptionOptions:
          {{- toYaml . | nindent 10 }}
        {{- end }}
      {{- end }}
      {{- if .Values.customArtifactRepository }}
      {{- toYaml .Values.customArtifactRepository | nindent 6 }}
      {{- end }}
    {{- end }}
    {{- if .Values.controller.metricsConfig.enabled }}
    metricsConfig:
      enabled: {{ .Values.controller.metricsConfig.enabled }}
      path: {{ .Values.controller.metricsConfig.path }}
      port: {{ .Values.controller.metricsConfig.port }}
      {{- if .Values.controller.metricsConfig.metricsTTL }}
      metricsTTL: {{ .Values.controller.metricsConfig.metricsTTL }}
      {{- end }}
      ignoreErrors: {{ .Values.controller.metricsConfig.ignoreErrors }}
      secure: {{ .Values.controller.metricsConfig.secure }}
    {{- end }}
    {{- if .Values.controller.telemetryConfig.enabled }}
    telemetryConfig:
      enabled: {{ .Values.controller.telemetryConfig.enabled }}
      path: {{ .Values.controller.telemetryConfig.path }}
      port: {{ .Values.controller.telemetryConfig.port }}
      {{- if .Values.controller.telemetryConfig.metricsTTL }}
      metricsTTL: {{ .Values.controller.telemetryConfig.metricsTTL }}
      {{- end }}
      ignoreErrors: {{ .Values.controller.telemetryConfig.ignoreErrors }}
      secure: {{ .Values.controller.telemetryConfig.secure }}
    {{- end }}
persistence:
connectionPool:
maxIdleConns: 5
maxOpenConns: 0
archive: true
archiveTTL: 5d
postgresql:
host: citus-master-svc.user-system-{{ .Values.global.bfl.username }}
port: 5432
database: user_space_{{ .Values.global.bfl.username }}_argo
tableName: argo_workflows
userNameSecret:
name: rss-secrets
key: pg_user
passwordSecret:
name: rss-secrets
key: pg_password
{{- if .Values.controller.workflowDefaults }}
workflowDefaults:
{{ toYaml .Values.controller.workflowDefaults | indent 6 }}
{{- end }}
{{- if .Values.server.sso.enabled }}
sso:
issuer: {{ .Values.server.sso.issuer }}
clientId:
name: {{ .Values.server.sso.clientId.name }}
key: {{ .Values.server.sso.clientId.key }}
clientSecret:
name: {{ .Values.server.sso.clientSecret.name }}
key: {{ .Values.server.sso.clientSecret.key }}
redirectUrl: {{ .Values.server.sso.redirectUrl }}
rbac:
enabled: {{ .Values.server.sso.rbac.enabled }}
{{- with .Values.server.sso.scopes }}
scopes: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.sso.issuerAlias }}
issuerAlias: {{ toYaml . }}
{{- end }}
{{- with .Values.server.sso.sessionExpiry }}
sessionExpiry: {{ toYaml . }}
{{- end }}
{{- with .Values.server.sso.customGroupClaimName }}
customGroupClaimName: {{ toYaml . }}
{{- end }}
{{- with .Values.server.sso.userInfoPath }}
userInfoPath: {{ toYaml . }}
{{- end }}
{{- with .Values.server.sso.insecureSkipVerify }}
insecureSkipVerify: {{ toYaml . }}
{{- end }}
{{- end }}
{{- with .Values.controller.workflowRestrictions }}
workflowRestrictions: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.links }}
links: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.columns }}
columns: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.navColor }}
navColor: {{ . }}
{{- end }}
{{- with .Values.controller.retentionPolicy }}
retentionPolicy: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.emissary.images }}
images: {{- toYaml . | nindent 6 }}
{{- end }}
nodeEvents:
enabled: {{ .Values.controller.nodeEvents.enabled }}
{{- with .Values.controller.kubeConfig }}
kubeConfig: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.podGCGracePeriodSeconds }}
podGCGracePeriodSeconds: {{ . }}
{{- end }}
{{- with .Values.controller.podGCDeleteDelayDuration }}
podGCDeleteDelayDuration: {{ . }}
{{- end }}


@@ -1,45 +0,0 @@
{{- if .Values.controller.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.singleNamespace }}
kind: RoleBinding
{{ else }}
kind: ClusterRoleBinding
{{- end }}
metadata:
name: {{ .Release.Namespace }}:{{ template "argo-workflows.controller.fullname" . }}
{{- if .Values.singleNamespace }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" .Values.controller.name) | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if .Values.singleNamespace }}
kind: Role
{{ else }}
kind: ClusterRole
{{- end }}
name: {{ template "argo-workflows.controller.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "argo-workflows.controllerServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
{{- if .Values.controller.clusterWorkflowTemplates.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:{{ template "argo-workflows.controller.fullname" . }}-cluster-template
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" .Values.controller.name) | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "argo-workflows.controller.fullname" . }}-cluster-template
subjects:
- kind: ServiceAccount
name: {{ template "argo-workflows.controllerServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- end }}


@@ -1,129 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "argo-workflows.controller.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" .Values.controller.name) | nindent 4 }}
app.kubernetes.io/version: {{ include "argo-workflows.controller_chart_version_label" . }}
{{- with .Values.controller.deploymentAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.controller.replicas }}
selector:
matchLabels:
{{- include "argo-workflows.selectorLabels" (dict "context" . "name" .Values.controller.name) | nindent 6 }}
template:
metadata:
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" .Values.controller.name) | nindent 8 }}
app.kubernetes.io/version: {{ include "argo-workflows.controller_chart_version_label" . }}
{{- with .Values.controller.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "argo-workflows.controllerServiceAccountName" . }}
{{- with .Values.controller.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.extraInitContainers }}
initContainers:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
containers:
- name: controller
image: "{{- include "argo-workflows.image" (dict "context" . "image" .Values.controller.image) }}:{{ default (include "argo-workflows.defaultTag" .) .Values.controller.image.tag }}"
imagePullPolicy: {{ .Values.images.pullPolicy }}
command: [ "workflow-controller" ]
args:
- "--configmap"
- "{{ template "argo-workflows.controller.fullname" . }}-configmap"
- "--executor-image"
- "{{- include "argo-workflows.image" (dict "context" . "image" .Values.executor.image) }}:{{ default (include "argo-workflows.defaultTag" .) .Values.executor.image.tag }}"
- "--loglevel"
- "{{ .Values.controller.logging.level }}"
- "--gloglevel"
- "{{ .Values.controller.logging.globallevel }}"
- "--log-format"
- "{{ .Values.controller.logging.format }}"
{{- if .Values.singleNamespace }}
- "--namespaced"
{{- end }}
{{- with .Values.controller.workflowWorkers }}
- "--workflow-workers"
- {{ . | quote }}
{{- end }}
{{- with .Values.controller.extraArgs }}
{{- toYaml . | nindent 10 }}
{{- end }}
securityContext:
{{- toYaml .Values.controller.securityContext | nindent 12 }}
env:
- name: ARGO_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LEADER_ELECTION_IDENTITY
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
{{- with .Values.controller.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.controller.resources | nindent 12 }}
{{- with .Values.controller.volumeMounts }}
volumeMounts:
{{- toYaml . | nindent 10 }}
{{- end }}
ports:
- name: {{ .Values.controller.metricsConfig.portName }}
containerPort: {{ .Values.controller.metricsConfig.port }}
- containerPort: 6060
livenessProbe: {{ .Values.controller.livenessProbe | toYaml | nindent 12 }}
{{- with .Values.controller.extraContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.images.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.volumes }}
volumes:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.controller.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.controller.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range $constraint := . }}
- {{ toYaml $constraint | nindent 8 | trim }}
{{- if not $constraint.labelSelector }}
labelSelector:
matchLabels:
{{- include "argo-workflows.selectorLabels" (dict "context" $ "name" $.Values.controller.name) | nindent 12 }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.controller.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}


@@ -1,16 +0,0 @@
{{- if .Values.controller.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "argo-workflows.controllerServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.controller.name "name" .Values.controller.name) | nindent 4 }}
{{- with .Values.controller.serviceAccount.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.controller.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "argo-workflows.fullname" $ }}-workflow
labels:
{{- include "argo-workflows.labels" (dict "context" $ "component" $.Values.controller.name "name" $.Values.controller.name) | nindent 4 }}
namespace: {{ $.Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "argo-workflows.fullname" $ }}-workflow
subjects:
- kind: ServiceAccount
name: {{ $.Values.workflow.serviceAccount.name }}
namespace: {{ $.Release.Namespace }}


@@ -1,8 +0,0 @@
{{ range .Values.extraObjects }}
---
{{- if typeIs "string" . }}
{{- tpl . $ }}
{{- else }}
{{- tpl (toYaml .) $ }}
{{- end }}
{{ end }}


@@ -1,45 +0,0 @@
{{- if and .Values.server.enabled .Values.server.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.singleNamespace }}
kind: RoleBinding
{{ else }}
kind: ClusterRoleBinding
{{- end }}
metadata:
name: {{ .Release.Namespace }}:{{ template "argo-workflows.server.fullname" . }}
{{- if .Values.singleNamespace }}
namespace: {{ .Release.Namespace | quote }}
{{- end }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if .Values.singleNamespace }}
kind: Role
{{ else }}
kind: ClusterRole
{{- end }}
name: {{ template "argo-workflows.server.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "argo-workflows.serverServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
{{- if .Values.server.clusterWorkflowTemplates.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:{{ template "argo-workflows.server.fullname" . }}-cluster-template
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "argo-workflows.server.fullname" . }}-cluster-template
subjects:
- kind: ServiceAccount
name: {{ template "argo-workflows.serverServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
{{- end -}}
{{- end -}}


@@ -1,142 +0,0 @@
{{- if .Values.server.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "argo-workflows.server.fullname" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
app: argoworkflows
app.kubernetes.io/managed-by: Helm
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 4 }}
app.kubernetes.io/version: {{ include "argo-workflows.server_chart_version_label" . }}
{{- with .Values.server.deploymentAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
applications.app.bytetrade.io/icon: https://argoproj.github.io/argo-workflows/assets/logo.png
applications.app.bytetrade.io/title: argoworkflows
applications.app.bytetrade.io/version: '0.35.0'
{{- end }}
spec:
{{- if not .Values.server.autoscaling.enabled }}
replicas: {{ .Values.server.replicas }}
{{- end }}
selector:
matchLabels:
{{- include "argo-workflows.selectorLabels" (dict "context" . "name" .Values.server.name) | nindent 6 }}
app: argoworkflows
template:
metadata:
labels:
app: argoworkflows
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 8 }}
app.kubernetes.io/version: {{ include "argo-workflows.server_chart_version_label" . }}
{{- with .Values.server.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "argo-workflows.serverServiceAccountName" . }}
{{- with .Values.server.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.extraInitContainers }}
initContainers:
{{- tpl (toYaml .) $ | nindent 8 }}
{{- end }}
containers:
- name: argo-server
image: "{{- include "argo-workflows.image" (dict "context" . "image" .Values.server.image) }}:{{ default (include "argo-workflows.defaultTag" .) .Values.server.image.tag }}"
imagePullPolicy: {{ .Values.images.pullPolicy }}
securityContext:
{{- toYaml .Values.server.securityContext | nindent 12 }}
args:
- server
- --configmap={{ template "argo-workflows.controller.fullname" . }}-configmap
{{- with .Values.server.extraArgs }}
{{- toYaml . | nindent 10 }}
{{- end }}
{{- if .Values.server.authMode }}
- "--auth-mode={{ .Values.server.authMode }}"
{{- end }}
- "--secure={{ .Values.server.secure }}"
- "--x-frame-options="
{{- if .Values.singleNamespace }}
- "--namespaced"
{{- end }}
- "--loglevel"
- "{{ .Values.server.logging.level }}"
- "--gloglevel"
- "{{ .Values.server.logging.globallevel }}"
- "--log-format"
- "{{ .Values.server.logging.format }}"
ports:
- name: web
containerPort: 2746
readinessProbe:
httpGet:
path: /
port: 2746
{{- if .Values.server.secure }}
scheme: HTTPS
{{- else }}
scheme: HTTP
{{- end }}
initialDelaySeconds: 10
periodSeconds: 20
env:
- name: IN_CLUSTER
value: "true"
- name: ARGO_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: BASE_HREF
value: {{ .Values.server.baseHref | quote }}
{{- with .Values.server.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}
resources:
{{- toYaml .Values.server.resources | nindent 12 }}
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
{{- with .Values.server.volumes }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.server.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.server.topologySpreadConstraints }}
topologySpreadConstraints:
{{- range $constraint := . }}
- {{ toYaml $constraint | nindent 8 | trim }}
{{- if not $constraint.labelSelector }}
labelSelector:
matchLabels:
{{- include "argo-workflows.selectorLabels" (dict "context" $ "name" $.Values.server.name) | nindent 12 }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.server.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- end -}}


@@ -1,16 +0,0 @@
{{- if and .Values.server.enabled .Values.server.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "argo-workflows.serverServiceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 4 }}
{{- with .Values.server.serviceAccount.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.server.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,36 +0,0 @@
{{- if .Values.server.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "argo-workflows.server.fullname" . }}-svc
namespace: {{ .Release.Namespace | quote }}
labels:
{{- include "argo-workflows.labels" (dict "context" . "component" .Values.server.name "name" .Values.server.name) | nindent 4 }}
app.kubernetes.io/version: {{ include "argo-workflows.server_chart_version_label" . }}
{{- with .Values.server.serviceAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
ports:
- port: {{ .Values.server.servicePort }}
{{- with .Values.server.servicePortName }}
name: {{ . }}
{{- end }}
targetPort: 2746
{{- if and (eq .Values.server.serviceType "NodePort") .Values.server.serviceNodePort }}
nodePort: {{ .Values.server.serviceNodePort }}
{{- end }}
selector:
app: {{ template "argo-workflows.server.fullname" . }}
{{- include "argo-workflows.selectorLabels" (dict "context" . "name" .Values.server.name) | nindent 4 }}
sessionAffinity: None
type: {{ .Values.server.serviceType }}
{{- if and (eq .Values.server.serviceType "LoadBalancer") .Values.server.loadBalancerIP }}
loadBalancerIP: {{ .Values.server.loadBalancerIP | quote }}
{{- end }}
{{- if and (eq .Values.server.serviceType "LoadBalancer") .Values.server.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- toYaml .Values.server.loadBalancerSourceRanges | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,840 +0,0 @@
images:
# -- Common tag for Argo Workflows images. Defaults to `.Chart.AppVersion`.
tag: ""
# -- imagePullPolicy to apply to all containers
pullPolicy: IfNotPresent
# -- Secrets with credentials to pull images from a private registry
pullSecrets: []
# - name: argo-pull-secret
## Custom resource configuration
crds:
# -- Install and upgrade CRDs
install: true
# -- Keep CRDs on chart uninstall
keep: true
# -- Annotations to be added to all CRDs
annotations: {}
# -- Create clusterroles that extend existing clusterroles to interact with argo-workflows crds
## Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles
createAggregateRoles: true
# -- String to partially override "argo-workflows.fullname" template
nameOverride:
# -- String to fully override "argo-workflows.fullname" template
fullnameOverride:
# -- Override the Kubernetes version, which is used to evaluate certain manifests
kubeVersionOverride: ""
# Override APIVersions
apiVersionOverrides:
# -- String to override apiVersion of autoscaling rendered by this helm chart
autoscaling: "" # autoscaling/v2
# -- String to override apiVersion of GKE resources rendered by this helm chart
cloudgoogle: "" # cloud.google.com/v1
# -- Restrict Argo to operate only in a single namespace (the namespace of the
# Helm release) by applying Roles and RoleBindings instead of the Cluster
# equivalents, and starting workflow-controller with the --namespaced flag.
# Use it in clusters with a strict access policy.
singleNamespace: false
workflow:
# -- Deprecated; use controller.workflowNamespaces instead.
namespace:
serviceAccount:
# -- Specifies whether a service account should be created
create: false
# -- Labels applied to created service account
labels: {}
# -- Annotations applied to created service account
annotations: {}
# -- Service account which is used to run workflows
name: "argo-workflow"
# -- Secrets with credentials to pull images from a private registry. Same format as `.Values.images.pullSecrets`
pullSecrets: []
rbac:
# -- Adds Role and RoleBinding for the above specified service account to be able to run workflows.
# A Role and Rolebinding pair is also created for each namespace in controller.workflowNamespaces (see below)
create: true
controller:
image:
# -- Registry to use for the controller
registry: quay.io
# -- Repository to use for the controller
repository: argoproj/workflow-controller
# -- Image tag for the workflow controller. Defaults to `.Values.images.tag`.
tag: ""
# -- parallelism dictates how many workflows can be running at the same time
parallelism:
# -- Globally limits the rate at which pods are created.
# This is intended to mitigate flooding of the Kubernetes API server by workflows with a large amount of
# parallel nodes.
resourceRateLimit: {}
# limit: 10
# burst: 1
rbac:
# -- Adds Role and RoleBinding for the controller.
create: true
# -- Allows controller to get, list, and watch certain k8s secrets
secretWhitelist: []
# -- Allows controller to get, list and watch all k8s secrets. Can only be used if secretWhitelist is empty.
accessAllSecrets: false
# -- Allows controller to create and update ConfigMaps. Enables memoization feature
writeConfigMaps: false
# -- Limits the maximum number of incomplete workflows in a namespace
namespaceParallelism:
# -- Resolves ongoing, uncommon AWS EKS bug: https://github.com/argoproj/argo-workflows/pull/4224
initialDelay:
# -- deploymentAnnotations is an optional map of annotations to be applied to the controller Deployment
deploymentAnnotations: {}
# -- podAnnotations is an optional map of annotations to be applied to the controller Pods
podAnnotations: {}
# -- Optional labels to add to the controller pods
podLabels: {}
# -- SecurityContext to set on the controller pods
podSecurityContext: {}
# podPortName: http
metricsConfig:
# -- Enables prometheus metrics server
enabled: false
# -- Path is the path where metrics are emitted. Must start with a "/".
path: /metrics
# -- Port is the port where metrics are emitted
port: 9090
# -- How often custom metrics are cleared from memory
metricsTTL: ""
# -- Flag that instructs prometheus to ignore metric emission errors.
ignoreErrors: false
# -- Flag to use a self-signed cert for TLS
secure: false
# -- Container metrics port name
portName: metrics
# -- Service metrics port
servicePort: 8090
# -- Service metrics port name
servicePortName: metrics
# -- ServiceMonitor relabel configs to apply to samples before scraping
## Ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
relabelings: []
# -- ServiceMonitor metric relabel configs to apply to samples before ingestion
## Ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint
metricRelabelings: []
# -- ServiceMonitor will add labels from the service to the Prometheus metric
## Ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec
targetLabels: []
# -- the controller container's securityContext
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# -- enable persistence using postgres
persistence: {}
# connectionPool:
# maxIdleConns: 100
# maxOpenConns: 0
# # save the entire workflow into etcd and DB
# nodeStatusOffLoad: false
# # enable archiving of old workflows
# archive: false
# postgresql:
# host: localhost
# port: 5432
# database: postgres
# tableName: argo_workflows
# # the database secrets must be in the same namespace of the controller
# userNameSecret:
# name: argo-postgres-config
# key: username
# passwordSecret:
# name: argo-postgres-config
# key: password
# -- Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level.
# Only valid for 2.7+
## See more: https://argoproj.github.io/argo-workflows/default-workflow-specs/
workflowDefaults: {}
# spec:
# ttlStrategy:
# secondsAfterCompletion: 84600
# # Ref: https://argoproj.github.io/argo-workflows/artifact-repository-ref/
# artifactRepositoryRef:
# configMap: my-artifact-repository # default is "artifact-repositories"
# key: v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map.
# -- Number of workflow workers
workflowWorkers: # 32
# -- Restricts the Workflows that the controller will process.
# Only valid for 2.9+
workflowRestrictions: {}
# templateReferencing: Strict|Secure
# telemetryConfig controls the path and port for prometheus telemetry. Telemetry is enabled and emitted in the same endpoint
# as metrics by default, but can be overridden using this config.
telemetryConfig:
# -- Enables prometheus telemetry server
enabled: false
# -- telemetry path
path: /telemetry
# -- telemetry container port
port: 8081
# -- How often custom metrics are cleared from memory
metricsTTL: ""
# -- Flag that instructs prometheus to ignore metric emission errors.
ignoreErrors: false
# -- Flag to use a self-signed cert for TLS
secure: false
# -- telemetry service port
servicePort: 8081
# -- telemetry service port name
servicePortName: telemetry
serviceMonitor:
# -- Enable a prometheus ServiceMonitor
enabled: false
# -- Prometheus ServiceMonitor labels
additionalLabels: {}
# -- Prometheus ServiceMonitor namespace
namespace: "" # "monitoring"
serviceAccount:
# -- Create a service account for the controller
create: true
# -- Service account name
name: ""
# -- Labels applied to created service account
labels: {}
# -- Annotations applied to created service account
annotations: {}
# -- Workflow controller name string
name: workflow-controller
# -- Specify all namespaces where this workflow controller instance will manage
# workflows. This controls where the service account and RBAC resources will
# be created. Only valid when singleNamespace is false.
workflowNamespaces:
- default
instanceID:
# -- Configures the controller to filter workflow submissions
# to only those which have a matching instanceID attribute.
## NOTE: If `instanceID.enabled` is set to `true` then either `instanceID.useReleaseName`
## or `instanceID.explicitID` must be defined.
enabled: true
# -- Use ReleaseName as instanceID
useReleaseName: true
# -- Use a custom instanceID
explicitID: ""
# explicitID: unique-argo-controller-identifier
logging:
# -- Set the logging level (one of: `debug`, `info`, `warn`, `error`)
level: info
# -- Set the glog logging level
globallevel: "0"
# -- Set the logging format (one of: `text`, `json`)
format: "text"
# -- Service type of the controller Service
serviceType: ClusterIP
# -- Annotations to be applied to the controller Service
serviceAnnotations: {}
# -- Optional labels to add to the controller Service
serviceLabels: {}
# -- Source ranges to allow access to service from. Only applies to service type `LoadBalancer`
loadBalancerSourceRanges: []
# -- Resource limits and requests for the controller
resources: {}
# -- Configure liveness [probe] for the controller
# @default -- See [values.yaml]
livenessProbe:
httpGet:
port: 6060
path: /healthz
failureThreshold: 3
initialDelaySeconds: 90
periodSeconds: 60
timeoutSeconds: 30
# -- Extra environment variables to provide to the controller container
extraEnv: []
# - name: FOO
# value: "bar"
# -- Extra arguments to be added to the controller
extraArgs: []
# -- Additional volume mounts to the controller main container
volumeMounts: []
# -- Additional volumes to the controller pod
volumes: []
# -- The number of controller pods to run
replicas: 1
pdb:
# -- Configure [Pod Disruption Budget] for the controller pods
enabled: false
# minAvailable: 1
# maxUnavailable: 1
# -- [Node selector]
nodeSelector:
kubernetes.io/os: linux
# -- [Tolerations] for use with node taints
tolerations: []
# -- Assign custom [affinity] rules
affinity: {}
# -- Assign custom [TopologySpreadConstraints] rules to the workflow controller
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
## If labelSelector is left out, it will default to the labelSelector configuration of the deployment
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: topology.kubernetes.io/zone
# whenUnsatisfiable: DoNotSchedule
# -- Leverage a PriorityClass to ensure your pods survive resource shortages.
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
# -- Configure Argo Server to show custom [links]
## Ref: https://argoproj.github.io/argo-workflows/links/
links: []
# -- Configure Argo Server to show custom [columns]
## Ref: https://github.com/argoproj/argo-workflows/pull/10693
columns: []
# -- Set ui navigation bar background color
navColor: ""
clusterWorkflowTemplates:
# -- Create a ClusterRole and CRB for the controller to access ClusterWorkflowTemplates.
enabled: true
# -- Extra containers to be added to the controller deployment
extraContainers: []
# -- Enables init containers to be added to the controller deployment
extraInitContainers: []
# -- Workflow retention by number of workflows
retentionPolicy: {}
# completed: 10
# failed: 3
# errored: 3
nodeEvents:
# -- Enable to emit events on node completion.
## This can take up a lot of space in k8s (typically etcd) resulting in errors when trying to create new events:
## "Unable to create audit event: etcdserver: mvcc: database space exceeded"
enabled: true
# -- Configure when workflow controller runs in a different k8s cluster with the workflow workloads,
# or needs to communicate with the k8s apiserver using an out-of-cluster kubeconfig secret.
# @default -- `{}` (See [values.yaml])
kubeConfig: {}
# # name of the kubeconfig secret, may not be empty when kubeConfig specified
# secretName: kubeconfig-secret
# # key of the kubeconfig secret, may not be empty when kubeConfig specified
# secretKey: kubeconfig
# # mounting path of the kubeconfig secret, default to /kube/config
# mountPath: /kubeconfig/mount/path
# # volume name when mounting the secret, default to kubeconfig
# volumeName: kube-config-volume
# -- Specifies the duration in seconds before a terminating pod is forcefully killed. A zero value indicates that the pod will be forcefully terminated immediately.
# @default -- `30` seconds (Kubernetes default)
podGCGracePeriodSeconds:
# -- The duration in seconds before the pods in the GC queue get deleted. A zero value indicates that the pods will be deleted immediately.
# @default -- `5s` (Argo Workflows default)
podGCDeleteDelayDuration: ""
# mainContainer adds default config for the main container that can be overridden in workflow templates
mainContainer:
# -- imagePullPolicy to apply to Workflow main container. Defaults to `.Values.images.pullPolicy`.
imagePullPolicy: ""
# -- Resource limits and requests for the Workflow main container
resources: {}
# -- Adds environment variables for the Workflow main container
env: []
# -- Adds reference environment variables for the Workflow main container
envFrom: []
# -- sets security context for the Workflow main container
securityContext: {}
# executor controls how the init and wait container should be customized
executor:
image:
# -- Registry to use for the Workflow Executors
registry: quay.io
# -- Repository to use for the Workflow Executors
repository: argoproj/argoexec
# -- Image tag for the workflow executor. Defaults to `.Values.images.tag`.
tag: ""
# -- Image PullPolicy to use for the Workflow Executors. Defaults to `.Values.images.pullPolicy`.
pullPolicy: ""
# -- Resource limits and requests for the Workflow Executors
resources: {}
# -- Passes arguments to the executor processes
args: []
# -- Adds environment variables for the executor.
env: []
# -- sets security context for the executor container
securityContext: {}
server:
# -- Deploy the Argo Server
enabled: true
# -- Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /.
## only updates base url of resources on client side,
## it's expected that a proxy server rewrites the request URL and gets rid of this prefix
## https://github.com/argoproj/argo-workflows/issues/716#issuecomment-433213190
baseHref: /
image:
# -- Registry to use for the server
registry: quay.io
# -- Repository to use for the server
repository: argoproj/argocli
# -- Image tag for the Argo Workflows server. Defaults to `.Values.images.tag`.
tag: ""
# -- optional map of annotations to be applied to the ui Deployment
deploymentAnnotations: {}
# -- optional map of annotations to be applied to the ui Pods
podAnnotations: {}
# -- Optional labels to add to the UI pods
podLabels: {}
# -- SecurityContext to set on the server pods
podSecurityContext: {}
rbac:
# -- Adds Role and RoleBinding for the server.
create: true
# -- Servers container-level security context
securityContext:
readOnlyRootFilesystem: false
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# -- Server name string
name: server
# -- Service type for server pods
serviceType: ClusterIP
# -- Service port for server
servicePort: 2746
# -- Service node port
serviceNodePort: # 32746
# -- Service port name
servicePortName: "http" # http
serviceAccount:
# -- Create a service account for the server
create: true
# -- Service account name
name: ""
# -- Labels applied to created service account
labels: {}
# -- Annotations applied to created service account
annotations: {}
# -- Annotations to be applied to the UI Service
serviceAnnotations: {}
# -- Optional labels to add to the UI Service
serviceLabels: {}
# -- Static IP address to assign when the service type is `LoadBalancer`
loadBalancerIP: ""
# -- Source ranges to allow access to service from. Only applies to service type `LoadBalancer`
loadBalancerSourceRanges: []
# -- Resource limits and requests for the server
resources: {}
# -- The number of server pods to run
replicas: 1
## Argo Server Horizontal Pod Autoscaler
autoscaling:
# -- Enable Horizontal Pod Autoscaler ([HPA]) for the Argo Server
enabled: false
# -- Minimum number of replicas for the Argo Server [HPA]
minReplicas: 1
# -- Maximum number of replicas for the Argo Server [HPA]
maxReplicas: 5
# -- Average CPU utilization percentage for the Argo Server [HPA]
targetCPUUtilizationPercentage: 50
# -- Average memory utilization percentage for the Argo Server [HPA]
targetMemoryUtilizationPercentage: 50
# -- Configures the scaling behavior of the target in both Up and Down directions.
# This is only available on HPA apiVersion `autoscaling/v2beta2` and newer
behavior: {}
# scaleDown:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 1
# periodSeconds: 180
# scaleUp:
# stabilizationWindowSeconds: 300
# policies:
# - type: Pods
# value: 2
pdb:
# -- Configure [Pod Disruption Budget] for the server pods
enabled: false
# minAvailable: 1
# maxUnavailable: 1
# -- [Node selector]
nodeSelector:
kubernetes.io/os: linux
# -- [Tolerations] for use with node taints
tolerations: []
# -- Assign custom [affinity] rules
affinity: {}
# -- Assign custom [TopologySpreadConstraints] rules to the argo server
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
## If labelSelector is left out, it will default to the labelSelector configuration of the deployment
topologySpreadConstraints: []
# - maxSkew: 1
# topologyKey: topology.kubernetes.io/zone
# whenUnsatisfiable: DoNotSchedule
# -- Leverage a PriorityClass to ensure your pods survive resource shortages
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
# -- Run the argo server in "secure" mode. Configure this value instead of `--secure` in extraArgs.
## See the following documentation for more details on secure mode:
## https://argoproj.github.io/argo-workflows/tls/
secure: false
# -- Extra environment variables to provide to the argo-server container
extraEnv: []
# - name: FOO
# value: "bar"
# -- Auth mode: one of `server`, `client` or `sso`. If you choose `sso`, please also configure `.Values.server.sso`.
## Ref: https://argoproj.github.io/argo-workflows/argo-server-auth-mode/
authMode: "server"
# -- Extra arguments to provide to the Argo server binary.
## Ref: https://argoproj.github.io/argo-workflows/argo-server/#options
extraArgs: []
logging:
# -- Set the logging level (one of: `debug`, `info`, `warn`, `error`)
level: info
# -- Set the glog logging level
globallevel: "0"
# -- Set the logging format (one of: `text`, `json`)
format: "text"
# -- Additional volume mounts to the server main container.
volumeMounts: []
# -- Additional volumes to the server pod.
volumes: []
## Ingress configuration.
# ref: https://kubernetes.io/docs/user-guide/ingress/
ingress:
# -- Enable an ingress resource
enabled: false
# -- Additional ingress annotations
annotations: {}
# -- Additional ingress labels
labels: {}
# -- Defines which ingress controller will implement the resource
ingressClassName: ""
# -- List of ingress hosts
## Hostnames must be provided if Ingress is enabled.
## Secrets must be manually created in the namespace
hosts: []
# - argoworkflows.example.com
# -- List of ingress paths
paths:
- /
# -- Ingress path type. One of `Exact`, `Prefix` or `ImplementationSpecific`
pathType: Prefix
# -- Additional ingress paths
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## for Kubernetes >=1.19 (when "networking.k8s.io/v1" is used)
# - path: /*
# pathType: Prefix
# backend:
# service
# name: ssl-redirect
# port:
# name: use-annotation
# -- Ingress TLS configuration
tls: []
# - secretName: argoworkflows-example-tls
# hosts:
# - argoworkflows.example.com
## Create a Google Backendconfig for use with the GKE Ingress Controller
## https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#configuring_ingress_features_through_backendconfig_parameters
GKEbackendConfig:
# -- Enable BackendConfig custom resource for Google Kubernetes Engine
enabled: false
# -- [BackendConfigSpec]
spec: {}
# spec:
# iap:
# enabled: true
# oauthclientCredentials:
# secretName: argoworkflows-secret
## Create a Google Managed Certificate for use with the GKE Ingress Controller
## https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
GKEmanagedCertificate:
# -- Enable ManagedCertificate custom resource for Google Kubernetes Engine.
enabled: false
# -- Domains for the Google Managed Certificate
domains:
- argoworkflows.example.com
## Create a Google FrontendConfig Custom Resource, for use with the GKE Ingress Controller
## https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features_through_frontendconfig_parameters
GKEfrontendConfig:
# -- Enable FrontendConfig custom resource for Google Kubernetes Engine
enabled: false
# -- [FrontendConfigSpec]
spec: {}
# spec:
# redirectToHttps:
# enabled: true
# responseCodeName: RESPONSE_CODE
clusterWorkflowTemplates:
# -- Create a ClusterRole and CRB for the server to access ClusterWorkflowTemplates.
enabled: true
# -- Give the server permissions to edit ClusterWorkflowTemplates.
enableEditing: true
# SSO configuration when SSO is specified as a server auth mode.
sso:
# -- Create SSO configuration. If you set this to `true`, please also set `.Values.server.authMode` to `sso`.
enabled: false
# -- The root URL of the OIDC identity provider
issuer: https://accounts.google.com
clientId:
# -- Name of secret to retrieve the app OIDC client ID
name: argo-server-sso
# -- Key of secret to retrieve the app OIDC client ID
key: client-id
clientSecret:
# -- Name of a secret to retrieve the app OIDC client secret
name: argo-server-sso
# -- Key of a secret to retrieve the app OIDC client secret
key: client-secret
# -- The OIDC redirect URL. Should be in the form <argo-root-url>/oauth2/callback.
redirectUrl: https://argo/oauth2/callback
rbac:
# -- Adds ServiceAccount Policy to server (Cluster)Role.
enabled: true
# -- Whitelist to allow server to fetch Secrets
## When present, restricts secrets the server can read to a given list.
## You can use it to restrict the server to only be able to access the
## service account token secrets that are associated with service accounts
## used for authorization.
secretWhitelist: []
# -- Scopes requested from the SSO ID provider
## The 'groups' scope requests group membership information, which is usually used for authorization decisions.
scopes: []
# - groups
# -- Define how long your login is valid for (in hours)
## If omitted, defaults to 10h.
sessionExpiry: ""
# -- Alternate root URLs that can be included for some OIDC providers
issuerAlias: ""
# -- Override claim name for OIDC groups
customGroupClaimName: ""
# -- Specify the user info endpoint that contains the groups claim
## Configure this if your OIDC provider provides groups information only using the user-info endpoint (e.g. Okta)
userInfoPath: ""
# -- Skip TLS verification for the HTTP client
insecureSkipVerify: false
# -- Extra containers to be added to the server deployment
extraContainers: []
# -- Enables init containers to be added to the server deployment
extraInitContainers: []
# -- Array of extra K8s manifests to deploy
extraObjects: []
# - apiVersion: secrets-store.csi.x-k8s.io/v1
# kind: SecretProviderClass
# metadata:
# name: argo-server-sso
# spec:
# provider: aws
# parameters:
# objects: |
# - objectName: "argo/server/sso"
# objectType: "secretsmanager"
# jmesPath:
# - path: "client_id"
# objectAlias: "client_id"
# - path: "client_secret"
# objectAlias: "client_secret"
# secretObjects:
# - data:
# - key: client_id
# objectName: client_id
# - key: client_secret
# objectName: client_secret
# secretName: argo-server-sso-secrets-store
# type: Opaque
# -- Use static credentials for S3 (eg. when not using AWS IRSA)
useStaticCredentials: true
artifactRepository:
# -- Archive the main container logs as an artifact
archiveLogs: true
# -- Store artifacts in an S3-compliant object store
# @default -- See [values.yaml]
s3:
# # Note the `key` attribute is not the actual secret, it's the PATH to
# # the contents in the associated secret, as defined by the `name` attribute.
accessKeySecret:
name: argo-workflow-log-fakes3
key: AWS_ACCESS_KEY_ID
secretKeySecret:
name: argo-workflow-log-fakes3
key: AWS_SECRET_ACCESS_KEY
# # insecure will disable TLS. Primarily used for minio installs not configured with TLS
insecure: true
keyFormat: "{{workflow.namespace}}/{{workflow.name}}/{{pod.name}}"
bucket: mongo-backup
# endpoint: workflow-archivelog-s3:4568
# region:
# roleARN:
# useSDKCreds: true
# encryptionOptions:
# enableEncryption: true
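The `keyFormat` above is a plain template over Argo workflow variables. A minimal sketch of how the placeholders expand, with made-up variable values:

```python
def expand_key_format(fmt: str, variables: dict) -> str:
    # Naive {{var}} substitution, mirroring how keyFormat placeholders expand.
    for name, value in variables.items():
        fmt = fmt.replace("{{" + name + "}}", value)
    return fmt

key = expand_key_format(
    "{{workflow.namespace}}/{{workflow.name}}/{{pod.name}}",
    {
        "workflow.namespace": "user-space-alice",  # illustrative values only
        "workflow.name": "rss-sync-x7k2p",
        "pod.name": "rss-sync-x7k2p-main-1234",
    },
)
print(key)  # user-space-alice/rss-sync-x7k2p/rss-sync-x7k2p-main-1234
```

Because logs from every pod land under this prefix, keeping the namespace/workflow/pod hierarchy in the key makes bucket contents browsable per workflow run.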
# -- Store artifact in a GCS object store
# @default -- `{}` (See [values.yaml])
gcs: {}
# bucket: <project>-argo
# keyFormat: "{{ \"{{workflow.namespace}}/{{workflow.name}}/{{pod.name}}\" }}"
# serviceAccountKeySecret is a secret selector.
# It references the k8s secret named 'my-gcs-credentials'.
# This secret is expected to have the key 'serviceAccountKey',
# containing the base64 encoded credentials
# to the bucket.
#
# If it's running on GKE and Workload Identity is used,
# serviceAccountKeySecret is not needed.
# serviceAccountKeySecret:
# name: my-gcs-credentials
# key: serviceAccountKey
# -- Store artifact in Azure Blob Storage
# @default -- `{}` (See [values.yaml])
azure: {}
# endpoint: https://mystorageaccountname.blob.core.windows.net
# container: my-container-name
# blobNameFormat: path/in/container
## accountKeySecret is a secret selector.
## It references the k8s secret named 'my-azure-storage-credentials'.
## This secret is expected to have the key 'account-access-key',
## containing the base64 encoded credentials to the storage account.
## If a managed identity has been assigned to the machines running the
## workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
## then accountKeySecret is not needed, and useSDKCreds should be
## set to true instead:
# useSDKCreds: true
# accountKeySecret:
# name: my-azure-storage-credentials
# key: account-access-key
# -- The section of custom artifact repository.
# Utilize a custom artifact repository that is not one of the current base ones (s3, gcs, azure)
customArtifactRepository: {}
# artifactory:
# repoUrl: https://artifactory.example.com/raw
# usernameSecret:
# name: artifactory-creds
# key: username
# passwordSecret:
# name: artifactory-creds
# key: password
# -- The section of [artifact repository ref](https://argoproj.github.io/argo-workflows/artifact-repository-ref/).
# Each map key is the name of a ConfigMap
# @default -- `{}` (See [values.yaml])
artifactRepositoryRef: {}
# # -- 1st ConfigMap
# # If you want to use this config map by default, name it "artifact-repositories".
# # Otherwise, you can provide a reference to a
# # different config map in `artifactRepositoryRef.configMap`.
# artifact-repositories:
# # -- v3.0 and after - if you want to use a specific key, put that key into this annotation.
# annotations:
# workflows.argoproj.io/default-artifact-repository: default-v1-s3-artifact-repository
# # 1st data of configmap. See above artifactRepository or customArtifactRepository.
# default-v1-s3-artifact-repository:
# archiveLogs: false
# s3:
# bucket: my-bucket
# endpoint: minio:9000
# insecure: true
# accessKeySecret:
# name: my-minio-cred
# key: accesskey
# secretKeySecret:
# name: my-minio-cred
# key: secretkey
# # 2nd data
# oss-artifact-repository:
# archiveLogs: false
# oss:
# endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com
# bucket: $mybucket
# # accessKeySecret and secretKeySecret are secret selectors.
# # They reference the k8s secret named 'bucket-workflow-artifact-credentials'.
# # This secret is expected to have the keys 'accessKey'
# # and 'secretKey', containing the base64 encoded credentials
# # to the bucket.
# accessKeySecret:
# name: $mybucket-credentials
# key: accessKey
# secretKeySecret:
# name: $mybucket-credentials
# key: secretKey
# # 2nd ConfigMap
# another-artifact-repositories:
# annotations:
# workflows.argoproj.io/default-artifact-repository: gcs
# gcs:
# bucket: my-bucket
# keyFormat: prefix/in/bucket/{{workflow.name}}/{{pod.name}}
# serviceAccountKeySecret:
# name: my-gcs-credentials
# key: serviceAccountKey
emissary:
# -- The command/args for each image used in workflows; needed when the command is not specified and the emissary executor is used.
## See more: https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary
images: []
# argoproj/argosay:v2:
# cmd: [/argosay]
# docker/whalesay:latest:
# cmd: [/bin/bash]



@@ -1,174 +1,4 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $rss_secret := (lookup "v1" "Secret" $namespace "rss-secrets") -}}
{{- $password := "" -}}
{{ if $rss_secret -}}
{{ $password = (index $rss_secret "data" "pg_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password := "" -}}
{{ if $rss_secret -}}
{{ $redis_password = (index $rss_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password_data := "" -}}
{{ $pg_password_data = $password | b64dec }}
{{- $mongo_secret := (lookup "v1" "Secret" .Release.Namespace "knowledge-mongodb") -}}
{{- $mongo_password := randAlphaNum 16 | b64enc -}}
{{- $mongo_password_data := "" -}}
{{ if $mongo_secret -}}
{{ $mongo_password_data = (index $mongo_secret "data" "mongodb-passwords" ) | b64dec }}
{{ else -}}
{{ $mongo_password_data = $mongo_password | b64dec }}
{{- end -}}
{{- $pg_user := printf "%s%s" "rss_" .Values.bfl.username -}}
{{- $pg_user = $pg_user | b64enc -}}
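The preamble above is Helm's standard look-up-or-generate pattern: when `rss-secrets` already exists its values are reused, so passwords stay stable across `helm upgrade`; only on first install is a random 16-character value minted. (Note that `lookup` returns nothing under `helm template` or `--dry-run`, so the generate branch always runs there.) A rough Python equivalent of the pattern, with illustrative names:

```python
import base64
import secrets
import string

ALPHANUM = string.ascii_letters + string.digits

def get_or_create_password(existing_secret, key):
    # Reuse the stored (already base64-encoded) value when the secret exists;
    # otherwise mint a random 16-char alphanumeric password and b64-encode it,
    # mirroring `randAlphaNum 16 | b64enc` in the template above.
    if existing_secret:
        return existing_secret["data"][key]
    pw = "".join(secrets.choice(ALPHANUM) for _ in range(16))
    return base64.b64encode(pw.encode()).decode()

existing = {"data": {"pg_password": base64.b64encode(b"keep-me-stable").decode()}}
assert get_or_create_password(existing, "pg_password") == existing["data"]["pg_password"]
assert len(base64.b64decode(get_or_create_password(None, "pg_password"))) == 16
```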
---
apiVersion: v1
kind: Secret
metadata:
name: rss-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $password }}
redis_password: {{ $redis_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: rss-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
pg_user: {{ $pg_user }}
pg_password: {{ $password }}
redis_password: {{ $redis_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: knowledge-mongodb
namespace: {{ .Release.Namespace }}
type: Opaque
{{ if $mongo_secret -}}
data:
mongodb-passwords: {{ index $mongo_secret "data" "mongodb-passwords" }}
{{ else -}}
data:
mongodb-passwords: {{ $mongo_password }}
{{ end }}
---
apiVersion: v1
kind: Secret
metadata:
name: knowledge-mongodb
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
{{ if $mongo_secret -}}
data:
mongodb-passwords: {{ index $mongo_secret "data" "mongodb-passwords" }}
{{ else -}}
data:
mongodb-passwords: {{ $mongo_password }}
{{ end }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rss-secrets-auth
namespace: {{ .Release.Namespace }}
data:
redis_password: "{{ $redis_password_data }}"
redis_addr: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379
redis_host: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
redis_port: '6379'
pg_url: postgres://rss_{{ .Values.bfl.username }}:{{ $pg_password_data }}@citus-master-svc.user-system-{{ .Values.bfl.username }}/user_space_{{ .Values.bfl.username }}_rss_v1?sslmode=disable
mongo_url: mongodb://knowledge-{{ .Values.bfl.username }}:{{ $mongo_password_data }}@mongo-cluster-mongos.user-system-{{ .Values.bfl.username }}:27017/{{ .Release.Namespace }}_knowledge
mongo_db: {{ .Release.Namespace }}_knowledge
postgres_host: citus-master-svc.user-system-{{ .Values.bfl.username }}
postgres_user: knowledge_{{ .Values.bfl.username }}
postgres_password: "{{ $pg_password_data }}"
postgres_db: user_space_{{ .Values.bfl.username }}_knowledge
postgres_port: '5432'
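The `pg_url` rendered into the ConfigMap above always has the same shape; a sketch of the construction with illustrative credentials (`alice` stands in for `.Values.bfl.username`):

```python
username = "alice"              # .Values.bfl.username (illustrative)
pg_password = "s3cretPassw0rd"  # decoded pg_password from rss-secrets (illustrative)

pg_url = (
    f"postgres://rss_{username}:{pg_password}"
    f"@citus-master-svc.user-system-{username}"
    f"/user_space_{username}_rss_v1?sslmode=disable"
)
print(pg_url)
# postgres://rss_alice:s3cretPassw0rd@citus-master-svc.user-system-alice/user_space_alice_rss_v1?sslmode=disable
```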
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rss-userspace-data
namespace: {{ .Release.Namespace }}
data:
appData: "{{ .Values.userspace.appData }}"
appCache: "{{ .Values.userspace.appCache }}"
username: "{{ .Values.bfl.username }}"
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: rss-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: rss
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: rss_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: rss-secrets
databases:
- name: rss
- name: rss_v1
- name: argo
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-redis
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: rss
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis_password
name: rss-secrets
namespace: knowledge
---
apiVersion: v1
kind: Service
metadata:
@@ -183,3 +13,22 @@ spec:
name: fakes3
port: 4568
targetPort: 4568
---
apiVersion: v1
kind: Service
metadata:
name: knowledge-base-api
namespace: user-system-{{ .Values.bfl.username }}
spec:
type: ClusterIP
selector:
app: systemserver
ports:
- protocol: TCP
name: knowledge-api
port: 3010
targetPort: 3010


@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -40,4 +39,4 @@ os:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,24 +0,0 @@
apiVersion: v2
name: recommend
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "recommend.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "recommend.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "recommend.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "recommend.labels" -}}
helm.sh/chart: {{ include "recommend.chart" . }}
{{ include "recommend.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "recommend.selectorLabels" -}}
app.kubernetes.io/name: {{ include "recommend.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "recommend.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "recommend.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
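`recommend.fullname` above follows the conventional Helm naming rule. A rough Python equivalent (the `fullnameOverride` branch is omitted for brevity):

```python
def fullname(release_name: str, chart_name: str) -> str:
    # Reuse the release name when it already contains the chart name,
    # otherwise join release and chart; cap at 63 chars (the DNS label
    # limit) and strip any trailing "-", as the template does.
    name = release_name if chart_name in release_name else f"{release_name}-{chart_name}"
    return name[:63].rstrip("-")

print(fullname("recommend", "recommend"))   # recommend
print(fullname("my-release", "recommend"))  # my-release-recommend
```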


@@ -1,117 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: recommend
namespace: {{ .Release.Namespace }}
spec:
type: ExternalName
externalName: argoworkflows-svc.{{ .Release.Namespace }}.svc.cluster.local
ports:
- name: http
port: 2746
protocol: TCP
targetPort: 2746
---
apiVersion: v1
kind: Service
metadata:
name: argoworkflows-ui
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8080
selector:
app: recommend
type: ClusterIP
---
{{ if (eq .Values.debugVersion true) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: recommend
namespace: {{ .Release.Namespace }}
labels:
app: recommend
applications.app.bytetrade.io/author: bytetrade.io
applications.app.bytetrade.io/name: recommend
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/recommend/icon.png
applications.app.bytetrade.io/title: recommend
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"recommend", "host":"argoworkflows-ui", "port":80,"title":"recommend"}]'
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: recommend
template:
metadata:
labels:
app: recommend
io.bytetrade.app: "true"
spec:
containers:
- name: recommend-proxy
image: nginx:stable-alpine3.17-slim
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8080
volumeMounts:
- name: nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: recommend-nginx-configs
items:
- key: nginx.conf
path: nginx.conf
{{ end }}
---
apiVersion: v1
data:
nginx.conf: |
# Configuration checksum:
pid /var/run/nginx.pid;
worker_processes auto;
events {
worker_connections 1024;
}
http {
server {
listen 8080;
location / {
proxy_pass http://recommend:2746;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
kind: ConfigMap
metadata:
name: recommend-nginx-configs
namespace: {{ .Release.Namespace }}


@@ -23,6 +23,7 @@ spec:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
@@ -65,7 +66,7 @@ spec:
containers:
- name: edge-desktop
-image: beclab/desktop:v0.2.46
+image: beclab/desktop:v0.2.59
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
@@ -77,7 +78,7 @@ spec:
value: http://bfl.{{ .Release.Namespace }}:8080
- name: desktop-server
-image: beclab/desktop-server:v0.2.46
+image: beclab/desktop-server:v0.2.59
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -139,7 +140,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
-image: 'beclab/ws-gateway:v1.0.3'
+image: 'beclab/ws-gateway:v1.0.5'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
@@ -155,7 +156,7 @@ spec:
- name: userspace-dir
hostPath:
type: Directory
-path: {{ .Values.userspace.userData }}
+path: '{{ .Values.userspace.userData }}'
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
@@ -449,6 +450,7 @@ data:
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
@@ -516,9 +518,11 @@ data:
clusters:
- name: original_dst
-connect_timeout: 5000s
+connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: authelia
connect_timeout: 2s
type: LOGICAL_DNS
@@ -623,6 +627,7 @@ data:
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
@@ -691,6 +696,8 @@ data:
connect_timeout: 5000s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: ws_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS


@@ -1,4 +1,3 @@
bfl:
username: 'test'
url: 'test'


@@ -1,3 +0,0 @@
# vault
https://github.com/beclab/analytic


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,26 +0,0 @@
apiVersion: v2
name: download
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,319 +0,0 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $download_secret := (lookup "v1" "Secret" $namespace "rss-secrets") -}}
{{- $pg_password := "" -}}
{{ if $download_secret -}}
{{ $pg_password = (index $download_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password := "" -}}
{{ if $download_secret -}}
{{ $redis_password = (index $download_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $download_nats_secret := (lookup "v1" "Secret" $namespace "download-secrets") -}}
{{- $nat_password := "" -}}
{{ if $download_nats_secret -}}
{{ $nat_password = (index $download_nats_secret "data" "nat_password") }}
{{ else -}}
{{ $nat_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: download-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
redis_password: {{ $redis_password }}
nat_password: {{ $nat_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: download-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: download
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: knowledge_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: download-secrets
databases:
- name: knowledge
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: download-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: download
appNamespace: {{ .Release.Namespace }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nat_password
name: download-secrets
refs: []
subjects:
- name: download_status
permission:
pub: allow
sub: allow
export:
- appName: knowledge
sub: allow
pub: allow
user: user-system-{{ .Values.bfl.username }}-download
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: download
namespace: {{ .Release.Namespace }}
labels:
app: download
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: download
template:
metadata:
labels:
app: download
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
initContainers:
- name: init-data
image: busybox:1.28
securityContext:
privileged: true
runAsNonRoot: false
runAsUser: 0
volumeMounts:
- name: config-dir
mountPath: /config
- name: download-dir
mountPath: /downloads
command:
- sh
- -c
- |
chown -R 1000:1000 /config && \
chown -R 1000:1000 /downloads
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: knowledge_{{ .Values.bfl.username }}
- name: PGPASSWORD
value: {{ $pg_password | b64dec }}
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_knowledge
containers:
- name: aria2
image: "beclab/aria2:v0.0.4"
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 6800
- containerPort: 6888
env:
- name: RPC_SECRET
value: kubespider
- name: PUID
value: "1000"
- name: PGID
value: "1000"
volumeMounts:
- name: download-dir
mountPath: /downloads
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
- name: yt-dlp
image: "beclab/yt-dlp:v0.0.17"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- containerPort: 3082
env:
- name: PG_USERNAME
value: knowledge_{{ .Values.bfl.username }}
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: user_space_{{ .Values.bfl.username }}_knowledge
- name: SETTING_URL
value: http://system-server.user-system-{{ .Values.bfl.username }}/legacy/v1alpha1/service.settings/v1/api/cookie/retrieve
- name: REDIS_HOST
value: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
- name: REDIS_PASSWORD
value: {{ $redis_password | b64dec }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-download
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.download_status"
volumeMounts:
- name: config-dir
mountPath: /app/config
- name: download-dir
mountPath: /app/downloads
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
- name: download-spider
image: "beclab/download-spider:v0.0.16"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: PG_USERNAME
value: knowledge_{{ .Values.bfl.username }}
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: user_space_{{ .Values.bfl.username }}_knowledge
- name: REDIS_HOST
value: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
- name: REDIS_PASSWORD
value: {{ $redis_password | b64dec }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-download
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.download_status"
volumeMounts:
- name: download-dir
mountPath: /downloads
ports:
- containerPort: 3080
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
volumes:
- name: config-dir
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appData }}/Downloads/config
- name: download-dir
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.userData }}
---
apiVersion: v1
kind: Service
metadata:
name: download-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: download
ports:
- name: "download-spider"
protocol: TCP
port: 3080
targetPort: 3080
- name: "aria2-server"
protocol: TCP
port: 6800
targetPort: 6800
- name: ytdlp-server
protocol: TCP
port: 3082
targetPort: 3082
---
apiVersion: v1
kind: Service
metadata:
name: download-api
namespace: user-system-{{ .Values.bfl.username }}
spec:
type: ClusterIP
selector:
app: systemserver
ports:
- protocol: TCP
name: download-api
port: 3080
targetPort: 3080
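The secret blocks at the top of this chart follow Helm's lookup-or-generate pattern: `lookup` finds an existing `download-secrets` on upgrade and reuses its stored passwords, and a fresh `randAlphaNum 16 | b64enc` value is minted only on first install, so credentials stay stable across releases. A minimal Python sketch of the same idempotent logic (the function and variable names are illustrative, not part of the chart):

```python
import base64
import secrets
import string

def get_or_create_password(existing_secret, key):
    """Return the base64-encoded password already stored in the secret,
    or generate a fresh 16-char alphanumeric one -- mirroring the chart's
    `randAlphaNum 16 | b64enc` fallback."""
    if existing_secret and key in existing_secret.get("data", {}):
        return existing_secret["data"][key]
    alphabet = string.ascii_letters + string.digits
    plain = "".join(secrets.choice(alphabet) for _ in range(16))
    return base64.b64encode(plain.encode()).decode()

# First render: no secret exists yet, so a password is generated.
first = get_or_create_password(None, "nat_password")
# Later renders look up the stored Secret and reuse the same value.
later = get_or_create_password({"data": {"nat_password": first}}, "nat_password")
assert first == later
```

The same round trip explains the `b64dec` calls in the env sections: Secret `data:` fields hold the encoded form, while the containers receive the decoded plaintext.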


@@ -1,11 +1,12 @@
{{- $namespace := printf "%s" "os-system" -}}
{{- $files_secret := (lookup "v1" "Secret" $namespace "files-secrets") -}}
{{- $password := "" -}}
{{- $files_postgres_password := "" -}}
{{ if $files_secret -}}
{{ $password = (index $files_secret "data" "password") }}
{{ $files_postgres_password = (index $files_secret "data" "files_postgres_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{ $files_postgres_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_redis_password := "" -}}
@@ -15,6 +16,14 @@
{{ $files_redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_nats_secret := (lookup "v1" "Secret" "os-system" "files-nats-secrets") -}}
{{- $files_nats_password := "" -}}
{{ if $files_nats_secret -}}
{{ $files_nats_password = (index $files_nats_secret "data" "files_nats_password") }}
{{ else -}}
{{ $files_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: apps/v1
kind: Deployment
@@ -33,13 +42,18 @@ spec:
metadata:
labels:
app: files
annotations:
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "nginx"
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "gateway,files,uploader"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/filebrowser"
spec:
serviceAccount: os-internal
serviceAccountName: os-internal
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
runAsUser: 0
runAsNonRoot: false
initContainers:
- name: init-data
image: busybox:1.28
@@ -59,18 +73,40 @@ spec:
- -c
- |
chown -R 1000:1000 /appdata; chown -R 1000:1000 /appcache; chown -R 1000:1000 /data
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server
deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB1
-c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >>
PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: '5432'
- name: PGUSER
value: files_os_system
- name: PGPASSWORD
value: {{ $files_postgres_password | b64dec }}
- name: PGDB1
value: os_system_files
containers:
- name: gateway
image: beclab/appdata-gateway:0.1.15
image: beclab/appdata-gateway:0.1.18
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
runAsUser: 0
ports:
- containerPort: 8080
env:
- name: FILES_SERVER_TAG
value: 'beclab/files-server:v0.2.46'
value: 'beclab/files-server:v0.2.69'
- name: NAMESPACE
valueFrom:
fieldRef:
@@ -88,6 +124,10 @@ spec:
value: seafile
image: beclab/media-server:v0.1.10
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 0
privileged: true
ports:
- containerPort: 9090
volumeMounts:
@@ -98,14 +138,15 @@ spec:
{{ if .Values.sharedlib }}
- name: shared-lib
mountPath: /data/External
mountPropagation: Bidirectional
{{ end }}
- name: files
image: beclab/files-server:v0.2.46
image: beclab/files-server:v0.2.69
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 1000
runAsUser: 0
privileged: true
volumeMounts:
- name: fb-data
@@ -157,7 +198,7 @@ spec:
# - name: ZINC_USER
# value: zincuser-files-os-system
# - name: ZINC_PASSWORD
# value: {{ $password | b64dec }}
# value: {{ $files_postgres_password | b64dec }}
# - name: ZINC_HOST
# value: zinc-server-svc.os-system
# - name: ZINC_PORT
@@ -191,6 +232,32 @@ spec:
# use redis db 0 for redis cache
- name: REDIS_DB
value: '0'
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: os-system-files-server
- name: NATS_PASSWORD
value: {{ $files_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
- name: RESERVED_SPACE
value: '1000'
- name: OLARES_VERSION
value: '1.12'
- name: FILE_CACHE_DIR
value: '/data/file_cache'
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: '5432'
- name: PGUSER
value: files_os_system
- name: PGPASSWORD
value: {{ $files_postgres_password | b64dec }}
- name: PGDB1
value: os_system_files
- name: POD_NAME
valueFrom:
fieldRef:
@@ -207,12 +274,14 @@ spec:
- /filebrowser
- --noauth
- name: uploader
image: beclab/upload:v1.0.7
image: beclab/upload:v1.0.14
env:
- name: UPLOAD_FILE_TYPE
value: '*'
- name: UPLOAD_LIMITED_SIZE
value: '21474836481'
value: '118111600640'
- name: RESERVED_SPACE
value: '1000'
volumeMounts:
- name: fb-data
mountPath: /appdata
@@ -223,13 +292,18 @@ spec:
{{ if .Values.sharedlib }}
- name: shared-lib
mountPath: /data/External
mountPropagation: Bidirectional
{{ end }}
resources: { }
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 0
privileged: true
- name: nginx
image: 'beclab/nginx-lua:n0.0.4'
image: 'beclab/docker-nginx-headers-more:ubuntu-v0.1.0'
securityContext:
runAsNonRoot: false
runAsUser: 0
@@ -237,6 +311,10 @@ spec:
- containerPort: 80
protocol: TCP
volumeMounts:
- name: files-nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: files-nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
@@ -248,31 +326,33 @@ spec:
- name: userspace-dir
hostPath:
type: Directory
path: {{ .Values.rootPath }}/rootfs/userspace
path: '{{ .Values.rootPath }}/rootfs/userspace'
- name: fb-data
hostPath:
type: DirectoryOrCreate
path: {{ .Values.rootPath }}/userdata/Cache/files
path: '{{ .Values.rootPath }}/userdata/Cache/files'
- name: upload-appdata
hostPath:
path: {{ .Values.rootPath }}/userdata/Cache
path: '{{ .Values.rootPath }}/userdata/Cache'
type: DirectoryOrCreate
- name: files-nginx-config
configMap:
name: files-nginx-config
items:
- key: nginx.conf
path: nginx.conf
- key: default.conf
path: default.conf
defaultMode: 420
- name: user-appdata-dir
hostPath:
path: {{ .Values.rootPath }}/userdata/Cache
path: '{{ .Values.rootPath }}/userdata/Cache'
type: Directory
{{ if .Values.sharedlib }}
- name: shared-lib
hostPath:
path: {{ .Values.sharedlib }}
path: "{{ .Values.sharedlib }}"
type: Directory
{{ end }}
@@ -345,14 +425,21 @@ spec:
- sh
- -c
- |
chown -R 1000:1000 /appdata
chown -R 1000:1000 /appdata
- args:
- -it
- nats.os-system:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
containers:
- name: files
image: beclab/files-server:v0.2.46
image: beclab/files-server:v0.2.69
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
allowPrivilegeEscalation: true
runAsUser: 0
runAsNonRoot: false
volumeMounts:
- name: fb-data
mountPath: /appdata
@@ -361,12 +448,16 @@ spec:
ports:
- containerPort: 8110
env:
- name: FB_DATABASE
value: /appdata/database/filebrowser.db
- name: FB_CONFIG
value: /appdata/config/settings.json
- name: FB_ROOT
- name: ROOT_PREFIX
value: /data
# - name: FB_DATABASE
# value: /appdata/database/filebrowser.db
# - name: FB_CONFIG
# value: /appdata/config/settings.json
# - name: FB_ROOT
# value: /data
- name: OLARES_VERSION
value: '1.12'
- name: NODE_NAME
valueFrom:
fieldRef:
@@ -378,11 +469,11 @@ spec:
- name: user-appdata-dir
hostPath:
type: Directory
path: {{ .Values.rootPath }}/userdata/Cache
path: '{{ .Values.rootPath }}/userdata/Cache'
- name: fb-data
hostPath:
type: DirectoryOrCreate
path: {{ .Values.rootPath }}/userdata/Cache/files-appdata
path: '{{ .Values.rootPath }}/userdata/Cache/files-appdata'
---
apiVersion: v1
@@ -409,9 +500,39 @@ metadata:
namespace: os-system
type: Opaque
data:
password: {{ $password }}
files_postgres_password: {{ $files_postgres_password }}
files_redis_password: {{ $files_redis_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: files-nats-secrets
namespace: os-system
data:
files_nats_password: {{ $files_nats_password }}
type: Opaque
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-pg
namespace: os-system
spec:
app: files
appNamespace: os-system
middleware: postgres
postgreSQL:
user: files_os_system
password:
valueFrom:
secretKeyRef:
key: files_postgres_password
name: files-secrets
databases:
- name: files
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
@@ -430,6 +551,37 @@ spec:
name: files-secrets
namespace: files-redis
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-server-nat
namespace: os-system
spec:
app: files-server
appNamespace: os-system
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: files_nats_password
name: files-nats-secrets
refs: []
subjects:
- export:
- appName: files-frontend
pub: allow
sub: allow
- appName: vault
pub: allow
sub: allow
name: files-notify
permission:
pub: allow
sub: allow
user: os-system-files-server
---
kind: ConfigMap
apiVersion: v1
@@ -439,6 +591,37 @@ metadata:
annotations:
kubesphere.io/creator: bytetrade.io
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 2700;
#gzip on;
client_max_body_size 4000M;
include /etc/nginx/conf.d/*.conf;
}
default.conf: |-
server {
listen 80 default_server;
@@ -488,12 +671,12 @@ data:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /api/raw/AppData {
@@ -505,12 +688,77 @@ data:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 60s;
client_max_body_size 2000M;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/raw {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/md5 {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/paste {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/cache {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /provider {
@@ -562,7 +810,7 @@ data:
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
proxy_request_buffering on;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
@@ -598,12 +846,12 @@ data:
add_header Accept-Ranges bytes;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /seafhttp/ {
@@ -617,12 +865,12 @@ data:
add_header Accept-Ranges bytes;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
# files
# for all routes matching a dot, check for files and return 404 if not found
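The nginx hunks in this file raise the upload-path timeouts in concert (e.g. `client_body_timeout 1800s`, `proxy_read_timeout 1800s`, `keepalive_timeout 2700s`). One property worth preserving when tuning them further: keep-alive should outlast the longest upstream wait, so nginx does not close a slow transfer early. A small illustrative check using the values above (the helper name is ours, not part of the config):

```python
# Timeout values (seconds) taken from the /api/raw location blocks above.
timeouts = {
    "client_body_timeout": 1800,
    "proxy_read_timeout": 1800,
    "proxy_send_timeout": 1800,
    "keepalive_timeout": 2700,
}

def keepalive_outlasts_proxy(t):
    """Keep-alive must not expire before the longest upstream wait,
    or nginx can drop a connection mid-transfer on slow uploads."""
    return t["keepalive_timeout"] >= max(
        t["proxy_read_timeout"], t["proxy_send_timeout"]
    )

assert keepalive_outlasts_proxy(timeouts)
```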


@@ -27,6 +27,14 @@
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_frontend_nats_secret := (lookup "v1" "Secret" $namespace "files-frontend-nats-secrets") -}}
{{- $files_frontend_nats_password := "" -}}
{{ if $files_frontend_nats_secret -}}
{{ $files_frontend_nats_password = (index $files_frontend_nats_secret "data" "files_frontend_nats_password") }}
{{ else -}}
{{ $files_frontend_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
@@ -104,6 +112,13 @@ spec:
labels:
app: files
io.bytetrade.app: "true"
annotations:
# support nginx 1.24.3 1.25.3
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "files-frontend"
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "driver-server"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "drive"
spec:
serviceAccountName: bytetrade-controller
securityContext:
@@ -134,6 +149,12 @@ spec:
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
@@ -185,6 +206,20 @@ spec:
value: "{{ $pg_password | b64dec }}"
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_cloud_drive_integration
- name: files-frontend-init
image: beclab/files-frontend:v1.3.61
imagePullPolicy: IfNotPresent
volumeMounts:
- name: app
mountPath: /cp_app
- name: nginx-confd
mountPath: /confd
command:
- sh
- -c
- |
cp -rf /app/* /cp_app/. && cp -rf /etc/nginx/conf.d/* /confd/.
containers:
# - name: gateway
# image: beclab/appdata-gateway:0.1.12
@@ -283,18 +318,33 @@ spec:
# - /filebrowser
# - --noauth
- name: files-frontend
image: beclab/files-frontend:v1.3.9
image: beclab/docker-nginx-headers-more:ubuntu-v0.1.0
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 80
env:
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-files-frontend
- name: NATS_PASSWORD
value: {{ $files_frontend_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
volumeMounts:
- name: userspace-dir
mountPath: /data
- name: app
mountPath: /app
- name: nginx-confd
mountPath: /etc/nginx/conf.d
- name: drive-server
image: beclab/drive:v0.0.29
image: beclab/drive:v0.0.72
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
@@ -314,8 +364,10 @@ spec:
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
- name: task-executor
image: beclab/driveexecutor:v0.0.29
image: beclab/driveexecutor:v0.0.72
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
@@ -335,6 +387,8 @@ spec:
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
# - name: terminus-upload-sidecar
# image: beclab/upload:v1.0.3
# env:
@@ -397,40 +451,48 @@ spec:
fieldPath: status.podIP
volumes:
- name: data-dir
hostPath:
path: '{{ .Values.rootPath }}/rootfs/userspace'
type: Directory
- name: watch-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}/Documents
path: '{{ .Values.userspace.userData }}/Documents'
- name: userspace-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}
path: '{{ .Values.userspace.userData }}'
- name: userspace-app-dir
hostPath:
type: Directory
path: {{ .Values.userspace.appData }}
path: '{{ .Values.userspace.appData }}'
- name: fb-data
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appCache}}/files
path: '{{ .Values.userspace.appCache}}/files'
- name: upload-data
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}
path: '{{ .Values.userspace.userData }}'
- name: upload-appdata
hostPath:
type: Directory
path: {{ .Values.userspace.appCache}}
path: '{{ .Values.userspace.appCache}}'
- name: uploads-temp
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appCache }}/files/uploadstemp
path: '{{ .Values.userspace.appCache }}/files/uploadstemp'
- name: terminus-sidecar-config
configMap:
name: sidecar-upload-configs
items:
- key: envoy.yaml
path: envoy.yaml
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
@@ -606,6 +668,16 @@ data:
redis_password: {{ $redis_password }}
pg_password: {{ $pg_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: files-frontend-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
files_frontend_nats_password: {{ $files_frontend_nats_password }}
type: Opaque
#---
#apiVersion: apr.bytetrade.io/v1alpha1
#kind: MiddlewareRequest
@@ -646,6 +718,31 @@ spec:
name: zinc-files-secrets
namespace: zinc-files
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-frontend-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: files-frontend
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: files_frontend_nats_password
name: files-frontend-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-files-frontend
---
apiVersion: v1
@@ -690,11 +787,14 @@ data:
prefix: "/upload"
route:
cluster: upload_original_dst
timeout: 1800s
idle_timeout: 1800s
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 600s
timeout: 1800s
idle_timeout: 1800s
http_protocol_options:
accept_http_10: true
http_filters:
@@ -716,6 +816,7 @@ data:
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
@@ -781,9 +882,11 @@ data:
clusters:
- name: original_dst
connect_timeout: 5000s
connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: upload_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS
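Across these charts, the NATS subject handed to applications follows the pattern `terminus.<app namespace>.<subject name>` (e.g. `terminus.os-system.files-notify` above, or `terminus.{{ .Release.Namespace }}.download_status` in the download chart). A tiny helper illustrating the convention (the function is ours, not part of the charts):

```python
def nats_subject(namespace, name):
    """Compose a fully qualified subject the way these charts do:
    terminus.<app namespace>.<subject name>."""
    return f"terminus.{namespace}.{name}"

assert nats_subject("os-system", "files-notify") == "terminus.os-system.files-notify"
assert (
    nats_subject("user-space-alice", "download_status")
    == "terminus.user-space-alice.download_status"
)
```

Keeping the exported `subjects:` entries in the MiddlewareRequest in sync with the `NATS_SUBJECT` env values is what lets consumers such as `files-frontend` and `vault` subscribe across namespaces.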


@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -46,4 +45,4 @@ os:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""
redis_password: ""
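The `init-container` blocks in these charts all share the same shape: poll `psql ... -c "SELECT 1"` until PostgreSQL answers, then settle briefly before the main containers start. A runnable Python sketch of that pattern (the `wait_for` helper and its retry cap are ours; the charts' shell loops poll indefinitely):

```python
import subprocess
import time

def wait_for(check_cmd, max_tries=30, delay=0.0):
    """Poll until check_cmd exits 0 -- the same shape as the charts'
    psql init-containers, which run:
    psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1".
    Returns False once max_tries is exhausted (an addition for the sketch)."""
    for _ in range(max_tries):
        if subprocess.run(check_cmd, shell=True, capture_output=True).returncode == 0:
            return True
        print("-", end="")
        time.sleep(delay)
    return False

assert wait_for("true")                    # dependency already up
assert not wait_for("false", max_tries=3)  # never comes up: gives up at the cap
```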


@@ -0,0 +1,646 @@
{{- $share_secret := (lookup "v1" "Secret" "os-system" "knowledge-share-secrets") -}}
{{- $redis_password := "" -}}
{{ if $share_secret -}}
{{ $redis_password = (index $share_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password := "" -}}
{{ if $share_secret -}}
{{ $pg_password = (index $share_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $knowledge_nats_secret := (lookup "v1" "Secret" "os-system" "knowledge-secrets") -}}
{{- $nat_password := "" -}}
{{ if $knowledge_nats_secret -}}
{{ $nat_password = (index $knowledge_nats_secret "data" "nat_password") }}
{{ else -}}
{{ $nat_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: knowledge-secrets
namespace: os-system
type: Opaque
data:
nat_password: {{ $nat_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: knowledge-share-secrets
namespace: os-system
type: Opaque
data:
pg_password: {{ $pg_password }}
redis_password: {{ $redis_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-pg
namespace: os-system
spec:
app: knowledge
appNamespace: os-system
middleware: postgres
postgreSQL:
user: knowledge_os_system
password:
valueFrom:
secretKeyRef:
key: pg_password
name: knowledge-share-secrets
databases:
- name: knowledge
extensions:
- pg_trgm
- btree_gin
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-redis
namespace: os-system
spec:
app: rss
appNamespace: os-system
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis_password
name: knowledge-share-secrets
namespace: knowledge
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-nat
namespace: os-system
spec:
app: knowledge
appNamespace: os-system
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nat_password
name: knowledge-secrets
refs:
- appName: download
appNamespace: os-system
subjects:
- name: download_status
perm:
- pub
- sub
user: os-system-knowledge
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: knowledge
namespace: os-system
labels:
app: knowledge
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: knowledge
template:
metadata:
labels:
app: knowledge
spec:
serviceAccount: os-internal
serviceAccountName: os-internal
securityContext:
runAsUser: 0
runAsNonRoot: false
initContainers:
- name: init-data
image: busybox:1.28
securityContext:
privileged: true
runAsNonRoot: false
runAsUser: 0
volumeMounts:
- name: userspace-dir
mountPath: /data
- name: cache-dir
mountPath: /appCache
command:
- sh
- -c
- |
chown -R 1000:1000 /data && \
chown -R 1000:1000 /appCache
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: "5432"
- name: PGUSER
value: knowledge_os_system
- name: PGPASSWORD
value: {{ $pg_password | b64dec }}
- name: PGDB
value: os_system_knowledge
containers:
- name: knowledge
image: "beclab/knowledge-base-api:v0.12.5"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- containerPort: 3010
env:
- name: BACKEND_URL
value: http://127.0.0.1:8080
- name: RSSHUB_URL
value: 'http://rss-server.os-system:1200'
- name: UPLOAD_SAVE_PATH
value: '/data/'
- name: SEARCH_URL
value: 'http://search3.os-system:80'
- name: REDIS_PASSWORD
value: {{ $redis_password_data }}
- name: REDIS_ADDR
value: redis-cluster-proxy.os-system
- name: PG_USERNAME
value: knowledge_os_system
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-headless.os-system
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: os_system_knowledge
- name: DOWNLOAD_URL
value: http://download-svc.os-system:3080
- name: YTDLP_DOWNLOAD_URL
value: http://download-svc.os-system:3082
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: os-system-knowledge
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.download_status
- name: SOCKET_URL
value: 'http://localhost:40010'
volumeMounts:
- name: userspace-dir
mountPath: /data
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 1Gi
- name: backend-server
image: "beclab/recommend-backend:v0.12.0"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: LISTEN_ADDR
value: 127.0.0.1:8080
- name: REDIS_PASSWORD
value: {{ $redis_password_data }}
- name: REDIS_ADDR
value: redis-cluster-proxy.os-system:6379
- name: RSS_HUB_URL
value: 'http://rss-server.os-system:1200/'
- name: WE_CHAT_REFRESH_FEED_URL
value: https://recommend-wechat-prd.bttcdn.com/api/wechat/entries
- name: WECHAT_ENTRY_CONTENT_GET_API_URL
value: https://recommend-wechat-prd.bttcdn.com/api/wechat/entry/content
- name: PG_USERNAME
value: knowledge_os_system
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-headless.os-system
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: os_system_knowledge
- name: WATCH_DIR
value: /data/
- name: YT_DLP_API_URL
value: http://download-svc.os-system:3082/api/v1/get_metadata
- name: DOWNLOAD_API_URL
value: http://download-svc.os-system:3080/api
volumeMounts:
- name: userspace-dir
mountPath: /data
ports:
- containerPort: 8080
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "800m"
memory: 400Mi
- name: sync
image: "beclab/recommend-sync:v0.12.0"
securityContext:
runAsUser: 0
runAsNonRoot: false
env:
- name: USERSPACE_DIRECTORY
value: /data
- name: KNOWLEDGE_BASE_API_URL
value: http://127.0.0.1:3010
- name: PG_HOST
value: citus-headless.os-system
- name: PG_USERNAME
value: knowledge_os_system
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_DATABASE
value: os_system_knowledge
- name: PG_PORT
value: "5432"
- name: TERMINUS_RECOMMEND_REDIS_ADDR
value: redis-cluster-proxy.os-system:6379
- name: TERMINUS_RECOMMEND_REDIS_PASSWORD
value: {{ $redis_password_data }}
volumeMounts:
- name: userspace-dir
mountPath: /data
- name: crawler
image: "beclab/recommend-crawler:v0.12.1"
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: KNOWLEDGE_BASE_API_URL
value: http://127.0.0.1:3010
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "800m"
memory: 800Mi
volumeMounts:
- name: cache-dir
mountPath: /appCache
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.4'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
env:
- name: WS_PORT
value: '3010'
- name: WS_URL
value: /knowledge/websocket/message
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: userspace-dir
hostPath:
type: Directory
path: '{{ .Values.rootPath }}/rootfs/userspace'
- name: cache-dir
hostPath:
path: '{{ .Values.rootPath }}/userdata/Cache/rss'
type: DirectoryOrCreate
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
---
apiVersion: v1
kind: Service
metadata:
name: rss-svc
namespace: os-system
spec:
type: ClusterIP
selector:
app: knowledge
ports:
- name: "backend-server"
protocol: TCP
port: 8080
targetPort: 8080
- name: "knowledge-base-api"
protocol: TCP
port: 3010
targetPort: 3010
- name: "knowledge-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: v1
kind: Service
metadata:
name: knowledge-base-api
namespace: os-system
spec:
type: ClusterIP
selector:
app: systemserver
ports:
- protocol: TCP
name: knowledge-api
port: 3010
targetPort: 3010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: download-nat
namespace: os-system
spec:
app: download
appNamespace: os-system
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nat_password
name: knowledge-secrets
refs: []
subjects:
- name: download_status
permission:
pub: allow
sub: allow
export:
- appName: knowledge
sub: allow
pub: allow
user: os-system-download
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: download
namespace: os-system
labels:
app: download
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: download
template:
metadata:
labels:
app: download
spec:
serviceAccount: os-internal
serviceAccountName: os-internal
securityContext:
runAsUser: 0
runAsNonRoot: false
initContainers:
- name: init-data
image: busybox:1.28
securityContext:
privileged: true
runAsNonRoot: false
runAsUser: 0
volumeMounts:
- name: config-dir
mountPath: /config
- name: download-dir
mountPath: /downloads
command:
- sh
- -c
- |
chown -R 1000:1000 /config && \
chown -R 1000:1000 /downloads
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: "5432"
- name: PGUSER
value: knowledge_os_system
- name: PGPASSWORD
value: {{ $pg_password | b64dec }}
- name: PGDB
value: os_system_knowledge
containers:
- name: aria2
image: "beclab/aria2:v0.0.4"
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 6800
- containerPort: 6888
env:
- name: RPC_SECRET
value: kubespider
- name: PUID
value: "1000"
- name: PGID
value: "1000"
volumeMounts:
- name: download-dir
mountPath: /downloads
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
- name: yt-dlp
image: "beclab/yt-dlp:v0.12.2"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- containerPort: 3082
env:
- name: PG_USERNAME
value: knowledge_os_system
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-headless.os-system
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: os_system_knowledge
- name: REDIS_HOST
value: redis-cluster-proxy.os-system
- name: REDIS_PASSWORD
value: {{ $redis_password | b64dec }}
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: os-system-download
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.download_status
volumeMounts:
- name: config-dir
mountPath: /app/config
- name: download-dir
mountPath: /app/downloads
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
- name: download-spider
image: "beclab/download-spider:v0.12.2"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: PG_USERNAME
value: knowledge_os_system
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-headless.os-system
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: os_system_knowledge
- name: REDIS_HOST
value: redis-cluster-proxy.os-system
- name: REDIS_PASSWORD
value: {{ $redis_password | b64dec }}
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: os-system-download
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.download_status
volumeMounts:
- name: download-dir
mountPath: /downloads
ports:
- containerPort: 3080
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 300Mi
volumes:
- name: config-dir
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.rootPath }}/userdata/Cache/download'
- name: download-dir
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.rootPath }}/rootfs/userspace'
---
apiVersion: v1
kind: Service
metadata:
name: download-svc
namespace: os-system
spec:
type: ClusterIP
selector:
app: download
ports:
- name: "download-spider"
protocol: TCP
port: 3080
targetPort: 3080
- name: "aria2-server"
protocol: TCP
port: 6800
targetPort: 6800
- name: ytdlp-server
protocol: TCP
port: 3082
targetPort: 3082
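The download Deployment above exposes aria2's JSON-RPC endpoint on port 6800 behind `download-svc`, guarded by the `RPC_SECRET` value (`kubespider`). As an illustrative sketch (not part of the chart), the snippet below builds the request body an in-cluster client would POST to enqueue a download; aria2's RPC protocol passes the secret as a `token:<secret>` string in the first positional parameter:

```python
import json

# Hostname/port taken from download-svc above; the secret comes from the
# aria2 container's RPC_SECRET env var.
ARIA2_URL = "http://download-svc.os-system:6800/jsonrpc"

def build_add_uri_payload(secret: str, uri: str, req_id: str = "dl-1") -> str:
    """Build an aria2.addUri JSON-RPC 2.0 request body.

    aria2 authenticates RPC calls by taking a "token:<secret>" string as
    the first positional parameter.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "aria2.addUri",
        "params": [f"token:{secret}", [uri]],
    })

payload = build_add_uri_payload("kubespider", "https://example.com/file.iso")
# POST this body to ARIA2_URL with Content-Type: application/json.
```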


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,26 +0,0 @@
apiVersion: v2
name: knowledge
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "knowledge.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "knowledge.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "knowledge.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "knowledge.labels" -}}
helm.sh/chart: {{ include "knowledge.chart" . }}
{{ include "knowledge.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "knowledge.selectorLabels" -}}
app.kubernetes.io/name: {{ include "knowledge.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "knowledge.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "knowledge.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
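The `knowledge.fullname` helper above picks a name in three steps: an explicit override wins, the release name is reused when it already contains the chart name, and otherwise release and chart names are joined, always truncated to 63 characters for DNS-label limits. A rough Python analogue of that logic (a sketch only; note Sprig's `trimSuffix "-"` strips a single trailing dash, while `rstrip` here strips them all):

```python
def trunc63(s: str) -> str:
    # Analogue of `trunc 63 | trimSuffix "-"` in the helper above.
    return s[:63].rstrip("-")

def knowledge_fullname(release_name: str, chart_name: str,
                       name_override: str = "",
                       fullname_override: str = "") -> str:
    if fullname_override:
        return trunc63(fullname_override)
    name = name_override or chart_name
    if name in release_name:
        # Release name already contains the chart name: use it as-is.
        return trunc63(release_name)
    return trunc63(f"{release_name}-{name}")

print(knowledge_fullname("prod", "knowledge"))            # prod-knowledge
print(knowledge_fullname("knowledge-prod", "knowledge"))  # knowledge-prod
```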


@@ -1,552 +0,0 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $knowledge_secret := (lookup "v1" "Secret" $namespace "rss-secrets") -}}
{{- $redis_password := "" -}}
{{ if $knowledge_secret -}}
{{ $redis_password = (index $knowledge_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password := "" -}}
{{ if $knowledge_secret -}}
{{ $pg_password = (index $knowledge_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $knowledge_nats_secret := (lookup "v1" "Secret" $namespace "knowledge-secrets") -}}
{{- $nat_password := "" -}}
{{ if $knowledge_nats_secret -}}
{{ $nat_password = (index $knowledge_nats_secret "data" "nat_password") }}
{{ else -}}
{{ $nat_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: knowledge-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
nat_password: {{ $nat_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: knowledge
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: knowledge_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: knowledge-secrets
databases:
- name: knowledge
extensions:
- pg_trgm
- btree_gin
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: knowledge-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: knowledge
appNamespace: {{ .Release.Namespace }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nat_password
name: knowledge-secrets
refs:
- appName: download
appNamespace: {{ .Release.Namespace }}
subjects:
- name: download_status
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-knowledge
---
apiVersion: v1
kind: ConfigMap
metadata:
name: knowledge-secrets-auth
namespace: {{ .Release.Namespace }}
data:
redis_password: {{ $redis_password_data }}
redis_addr: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379
redis_host: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
redis_port: '6379'
---
apiVersion: v1
kind: ConfigMap
metadata:
name: knowledge-userspace-data
namespace: {{ .Release.Namespace }}
data:
appData: "{{ .Values.userspace.appData }}"
appCache: "{{ .Values.userspace.appCache }}"
username: "{{ .Values.bfl.username }}"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: knowledge
namespace: {{ .Release.Namespace }}
labels:
app: knowledge
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: knowledge
template:
metadata:
labels:
app: knowledge
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
initContainers:
- name: init-data
image: busybox:1.28
securityContext:
privileged: true
runAsNonRoot: false
runAsUser: 0
volumeMounts:
- name: juicefs
mountPath: /juicefs
command:
- sh
- -c
- |
chown -R 1000:1000 /juicefs
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: knowledge_{{ .Values.bfl.username }}
- name: PGPASSWORD
value: {{ $pg_password | b64dec }}
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_knowledge
containers:
- name: knowledge
image: "beclab/knowledge-base-api:v0.1.57"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- containerPort: 3010
env:
- name: BACKEND_URL
value: http://127.0.0.1:8080
- name: RSSHUB_URL
value: 'http://rss-server.os-system:1200'
- name: SEARCH_URL
value: 'http://search3.os-system:80'
- name: REDIS_PASSWORD
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_password
- name: REDIS_ADDR
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_addr
- name: PDF_SAVE_PATH
value: /data/Home/Documents/Pdf/
- name: PG_USERNAME
value: knowledge_{{ .Values.bfl.username }}
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: user_space_{{ .Values.bfl.username }}_knowledge
- name: DOWNLOAD_URL
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3080
- name: BFL_USER_NAME
value: "{{ .Values.bfl.username }}"
- name: SETTING_URL
value: http://system-server.user-system-{{ .Values.bfl.username }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-knowledge
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.download_status"
- name: SOCKET_URL
value: 'http://localhost:40010'
volumeMounts:
- name: watch-dir
mountPath: /data/Home/Documents
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "1"
memory: 1Gi
- name: backend-server
image: "beclab/recommend-backend:v0.0.25"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: LISTEN_ADDR
value: 127.0.0.1:8080
- name: REDIS_PASSWORD
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_password
- name: REDIS_ADDR
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_addr
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.wise.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.wise.appKey }}
- name: RSS_HUB_URL
value: 'http://rss-server.os-system:1200/'
- name: WE_CHAT_REFRESH_FEED_URL
value: https://recommend-wechat-prd.bttcdn.com/api/wechat/entries
- name: WECHAT_ENTRY_CONTENT_GET_API_URL
value: https://recommend-wechat-prd.bttcdn.com/api/wechat/entry/content
- name: PG_USERNAME
value: knowledge_{{ .Values.bfl.username }}
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_PORT
value: "5432"
- name: PG_DATABASE
value: user_space_{{ .Values.bfl.username }}_knowledge
- name: WATCH_DIR
value: /data/Home/Downloads
- name: NOTIFY_SERVER
value: fsnotify-svc.user-system-{{ .Values.bfl.username }}:5079
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CONTAINER_NAME
value: backend-server
- name: YT_DLP_API_URL
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3082/api/v1/get_metadata
- name: DOWNLOAD_API_URL
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3080/api/termius/download
- name: SETTING_API_URL
value: http://system-server.user-system-{{ .Values.bfl.username }}/legacy/v1alpha1/service.settings/v1/api/cookie/retrieve
volumeMounts:
- name: watch-dir
mountPath: /data/Home/Downloads
ports:
- containerPort: 8080
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "800m"
memory: 400Mi
- name: sync
image: "beclab/recommend-sync:v0.0.15"
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: TERMIUS_USER_NAME
value: "{{ .Values.bfl.username }}"
- name: JUICEFS_ROOT_DIRECTORY
value: /juicefs
- name: KNOWLEDGE_BASE_API_URL
value: http://127.0.0.1:3010
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_USERNAME
value: knowledge_{{ .Values.bfl.username }}
- name: PG_PASSWORD
value: {{ $pg_password | b64dec }}
- name: PG_DATABASE
value: user_space_{{ .Values.bfl.username }}_knowledge
- name: PG_PORT
value: "5432"
- name: TERMINUS_RECOMMEND_REDIS_ADDR
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_addr
- name: TERMINUS_RECOMMEND_REDIS_PASSOWRD
valueFrom:
configMapKeyRef:
name: knowledge-secrets-auth
key: redis_password
volumeMounts:
- name: juicefs
mountPath: /juicefs
- name: crawler
image: "beclab/recommend-crawler:v0.0.14"
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: TERMIUS_USER_NAME
value: "{{ .Values.bfl.username }}"
- name: KNOWLEDGE_BASE_API_URL
value: http://127.0.0.1:3010
resources:
requests:
cpu: 20m
memory: 50Mi
limits:
cpu: "800m"
memory: 800Mi
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.4'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
env:
- name: WS_PORT
value: '3010'
- name: WS_URL
value: /knowledge/websocket/message
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: watch-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}
- name: juicefs
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appData }}/rss/data
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
---
apiVersion: v1
kind: Service
metadata:
name: rss-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: knowledge
ports:
- name: "backend-server"
protocol: TCP
port: 8080
targetPort: 8080
# - name: "rss-sdk"
# protocol: TCP
# port: 3000
# targetPort: 3000
- name: "knowledge-base-api"
protocol: TCP
port: 3010
targetPort: 3010
- name: "knowledge-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: v1
kind: Service
metadata:
name: knowledge-base-api
namespace: user-system-{{ .Values.bfl.username }}
spec:
type: ClusterIP
selector:
app: systemserver
ports:
- protocol: TCP
name: knowledge-api
port: 3010
targetPort: 3010
---
#apiVersion: v1
#data:
# mappings: |
# {
# "properties": {
# "@timestamp": {
# "type": "date",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "_id": {
# "type": "keyword",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "content": {
# "type": "text",
# "index": true,
# "store": true,
# "sortable": false,
# "aggregatable": false,
# "highlightable": true
# },
# "created": {
# "type": "numeric",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "format_name": {
# "type": "text",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "md5": {
# "type": "text",
# "analyzer": "keyword",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "meta": {
# "type": "text",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "name": {
# "type": "text",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "where": {
# "type": "text",
# "analyzer": "keyword",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# }
# }
# }
#kind: ConfigMap
#metadata:
# name: zinc-knowledge
# namespace: user-system-{{ .Values.bfl.username }}
#---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: knowledgebase-recommend-install-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: recommend.install
callback: http://rss-svc.{{ .Release.Namespace }}:3010/knowledge/algorithm/recommend/install
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: knowledgebase-recommend-uninstall-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: recommend.uninstall
callback: http://rss-svc.{{ .Release.Namespace }}:3010/knowledge/algorithm/recommend/uninstall
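Throughout the template above, passwords are generated once with `randAlphaNum 16 | b64enc`, stored base64-encoded in a Secret's `data` map, and decoded with `b64dec` wherever they are injected as plain env values. Because the template first `lookup`s the existing Secret, a later `helm upgrade` reuses the stored value instead of rotating the password. A rough Python analogue of that round trip (an illustrative sketch, not Helm's implementation):

```python
import base64
import secrets
import string

def rand_alpha_num(n: int) -> str:
    # Analogue of Sprig's randAlphaNum: n random letters/digits.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(n))

def b64enc(s: str) -> str:
    return base64.b64encode(s.encode()).decode()

def b64dec(s: str) -> str:
    return base64.b64decode(s.encode()).decode()

password = rand_alpha_num(16)      # plaintext value, e.g. for PG_PASSWORD
stored = b64enc(password)          # what lands in Secret .data.pg_password
assert b64dec(stored) == password  # env vars receive the decoded value
```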


@@ -1,43 +0,0 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
nodeport_ingress_https: 30082
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
docs:
nodeport: 30881
desktop:
nodeport: 30180
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
wise:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""


@@ -43,7 +43,14 @@ spec:
labels:
app: appstore
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "appstore-backend"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/opt/app/market"
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "appstore"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
@@ -83,14 +90,33 @@ spec:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: nginx-init
image: beclab/market-frontend:v0.3.11
imagePullPolicy: IfNotPresent
volumeMounts:
- name: app
mountPath: /cp_app
- name: nginx-confd
mountPath: /confd
command:
- sh
- -c
- |
cp -rf /app/* /cp_app/. && cp -rf /etc/nginx/conf.d/* /confd/.
containers:
- name: appstore
image: beclab/market-frontend:v0.3.2
image: beclab/docker-nginx-headers-more:ubuntu-v0.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
volumeMounts:
- name: app
mountPath: /app
- name: nginx-confd
mountPath: /etc/nginx/conf.d
- name: appstore-backend
image: beclab/market-backend:v0.3.2
image: beclab/market-backend:v0.3.11
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
@@ -170,7 +196,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
image: 'beclab/ws-gateway:v1.0.5'
command:
- /ws-gateway
env:
@@ -191,8 +217,12 @@ spec:
path: envoy.yaml
- name: opt-data
hostPath:
path: {{ .Values.userspace.appData}}/appstore/data
path: '{{ .Values.userspace.appData}}/appstore/data'
type: DirectoryOrCreate
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
---
apiVersion: v1


@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -42,4 +41,4 @@ os:
appstore:
marketProvider: ''
kubesphere:
redis_password: ""
redis_password: ""


@@ -0,0 +1,230 @@
{{- $namespace := printf "%s" "os-system" -}}
{{- $notifications_secret := (lookup "v1" "Secret" $namespace "notifications-secrets") -}}
{{- $pg_password := "" -}}
{{ if $notifications_secret -}}
{{ $pg_password = (index $notifications_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $nats_password := "" -}}
{{ if $notifications_secret -}}
{{ $nats_password = (index $notifications_secret "data" "nats_password") }}
{{ else -}}
{{ $nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: notifications-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
pg_password: {{ $pg_password }}
nats_password: {{ $nats_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: notifications-pg
namespace: {{ .Release.Namespace }}
spec:
app: notifications
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: notifications_os_system
password:
valueFrom:
secretKeyRef:
key: pg_password
name: notifications-secrets
databases:
- name: notifications
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: notifications-nats
namespace: {{ .Release.Namespace }}
spec:
app: notifications
appNamespace: {{ .Release.Namespace }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nats_password
name: notifications-secrets
refs: [] # TODO: refs to notifications-proxy's subject
subjects:
- export:
- appName: notifications-proxy
pub: allow
sub: allow
- appName: lldap
pub: allow
sub: allow
- appName: ks-component
pub: allow
sub: allow
- appName: authelia
pub: allow
sub: allow
name: system.notification
permission:
pub: allow
sub: allow
- export:
- appName: lldap
pub: allow
sub: allow
- appName: vault-server
pub: deny
sub: allow
- appName: seahub
pub: deny
sub: allow
- appName: knowledge
pub: deny
sub: allow
name: system.users
permission:
pub: allow
sub: allow
user: os-system-notifications
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: notifications-server
namespace: {{ .Release.Namespace }}
labels:
app: notifications-server
applications.app.bytetrade.io/author: bytetrade.io
annotations:
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: notifications-server
template:
metadata:
labels:
app: notifications-server
spec:
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: "5432"
- name: PGUSER
value: notifications_os_system
- name: PGPASSWORD
valueFrom:
secretKeyRef:
key: pg_password
name: notifications-secrets
- name: PGDB
value: os_system_notifications
containers:
- name: notifications-api
image: beclab/notifications-api:v1.12.3
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
protocol: TCP
env:
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
key: pg_password
name: notifications-secrets
- name: PRISMA_ENGINES_CHECKSUM_IGNORE_MISSING
value: '1'
- name: DATABASE_URL
value: postgres://notifications_os_system:$(DATABASE_PASSWORD)@citus-headless.os-system/os_system_notifications?sslmode=disable
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: os-system-notifications
- name: NATS_PASSWORD
valueFrom:
secretKeyRef:
key: nats_password
name: notifications-secrets
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.system.notification"
- name: NATS_SUBJECT_SYSTEM_USERS
value: "terminus.{{ .Release.Namespace }}.system.users"
livenessProbe:
tcpSocket:
port: 3010
initialDelaySeconds: 25
timeoutSeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 8
readinessProbe:
tcpSocket:
port: 3010
initialDelaySeconds: 25
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: notifications-service
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: notifications-server
ports:
- name: "notifications-server"
protocol: TCP
port: 80
targetPort: 3010
---
apiVersion: v1
kind: Service
metadata:
name: notifications-server
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: notifications-server
ports:
- name: "server"
protocol: TCP
port: 80
targetPort: 3010
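The `DATABASE_URL` above relies on Kubernetes dependent environment variables: `$(DATABASE_PASSWORD)` is expanded by the kubelet because that variable is declared earlier in the same container's `env` list, so the secret never appears verbatim in the manifest. A hedged Python sketch of the resulting DSN (hypothetical password; `randAlphaNum` passwords are URL-safe, so the percent-encoding here is purely defensive):

```python
from urllib.parse import quote

def build_dsn(user: str, password: str, host: str, db: str) -> str:
    # Percent-encode defensively; alphanumeric passwords pass through unchanged.
    return (f"postgres://{user}:{quote(password, safe='')}"
            f"@{host}/{db}?sslmode=disable")

dsn = build_dsn(
    "notifications_os_system", "s3cret",  # hypothetical password
    "citus-headless.os-system", "os_system_notifications",
)
```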


@@ -1,413 +1 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $notifications_secret := (lookup "v1" "Secret" $namespace "notifications-secrets") -}}
{{- $password := "" -}}
{{ if $notifications_secret -}}
{{ $password = (index $notifications_secret "data" "pg_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: notifications-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: notifications-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: notifications
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: notifications_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: notifications-secrets
databases:
- name: notifications
{{ if (eq .Values.debugVersion true) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: notifications-deployment
namespace: {{ .Release.Namespace }}
labels:
app: notifications
applications.app.bytetrade.io/author: bytetrade.io
applications.app.bytetrade.io/name: notifications
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/notifications/icon.png
applications.app.bytetrade.io/title: Notifications
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"notifications", "host":"notifications-service", "port":80,"title":"Notifications"}]'
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: notifications
template:
metadata:
labels:
app: notifications
io.bytetrade.app: "true"
spec:
initContainers:
- args:
- -it
- authelia-backend.os-system:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: notifications-frontend
image: beclab/notifications-frontend:v0.1.22
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumes:
- name: terminus-sidecar-config
configMap:
name: sidecar-configs
items:
- key: envoy.yaml
path: envoy.yaml
# - name: REDIS_HOST
# value: localhost
# - name: REDIS_PORT
# value: "6379"
# - name: notifications-worker
# image: aboveos/notifications-worker:v0.1.2
# imagePullPolicy: IfNotPresent
# env:
# - name: MONGO_URL
# value: mongodb://admin:123456@localhost:27017
# - name: REDIS_HOST
# value: localhost
# - name: REDIS_CACHE_SERVICE_HOST
# value: localhost
# - name: REDIS_PORT
# value: "6379"
# - name: mongodb
# image: mongo:4.4.5
# env:
# - name: MONGO_INITDB_ROOT_USERNAME
# value: admin
# - name: MONGO_INITDB_ROOT_PASSWORD
# value: '123456'
# imagePullPolicy: IfNotPresent
# ports:
# - containerPort: 27017
# volumeMounts:
# - name: mongo-data
# mountPath: /data/db
# - name: redis
# image: redis:7.0.5-alpine3.16
# imagePullPolicy: IfNotPresent
# volumeMounts:
# - name: redis-data
# mountPath: /data
# volumes:
# - name: mongo-data
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/notification/db
# - name: redis-data
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/notification/redisdata
{{ end }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: notifications-server
namespace: {{ .Release.Namespace }}
labels:
app: notifications-server
applications.app.bytetrade.io/author: bytetrade.io
annotations:
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: notifications-server
template:
metadata:
labels:
app: notifications-server
spec:
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: notifications_{{ .Values.bfl.username }}
- name: PGPASSWORD
value: {{ $password | b64dec }}
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_notifications
containers:
- name: notifications-api
image: beclab/notifications-api:v0.1.25
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
protocol: TCP
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.notification.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.notification.appKey }}
- name: DATABASE_PASSWORD
value: {{ $password | b64dec }}
- name: PRISMA_ENGINES_CHECKSUM_IGNORE_MISSING
value: '1'
- name: DATABASE_URL
value: postgres://notifications_{{ .Values.bfl.username }}:$(DATABASE_PASSWORD)@citus-master-svc.user-system-{{ .Values.bfl.username }}/user_space_{{ .Values.bfl.username }}_notifications?sslmode=disable
livenessProbe:
tcpSocket:
port: 3010
initialDelaySeconds: 25
timeoutSeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 8
readinessProbe:
tcpSocket:
port: 3010
initialDelaySeconds: 25
periodSeconds: 10
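In the `DATABASE_URL` above, Kubernetes expands `$(DATABASE_PASSWORD)` because that variable is declared earlier in the same `env` list. One caveat worth noting: a password containing URL-reserved characters would corrupt the DSN, so a client assembling the same URL by hand should percent-encode it. A minimal sketch (the user and host names below are placeholders, not values from this chart):

```python
from urllib.parse import quote

def build_dsn(user: str, password: str, host: str, db: str) -> str:
    # Percent-encode the password so URL-reserved characters
    # ('@', '/', ':') cannot corrupt the connection string.
    return f"postgres://{user}:{quote(password, safe='')}@{host}/{db}?sslmode=disable"

# Hypothetical values for illustration only:
dsn = build_dsn("notifications_alice", "p@ss/w0rd",
                "citus-master-svc.user-system-alice",
                "user_space_alice_notifications")
```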
---
apiVersion: v1
kind: Service
metadata:
name: notifications-service
namespace: {{ .Release.Namespace }}
{{ if (eq .Values.debugVersion true) }}
spec:
type: ClusterIP
selector:
app: notifications
ports:
- name: "notifications-frontend"
protocol: TCP
port: 80
targetPort: 80
{{ else }}
spec:
type: ClusterIP
selector:
app: notifications-server
ports:
- name: "notifications-server"
protocol: TCP
port: 80
targetPort: 3010
{{ end }}
---
apiVersion: v1
kind: Service
metadata:
name: notifications-server
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: notifications-server
ports:
- name: "server"
protocol: TCP
port: 80
targetPort: 3010
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: notifications-token-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: token
deployment: notifications-server
description: notifications provider
endpoint: notifications-server.{{ .Release.Namespace }}
group: service.notification
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: Create
uri: /termipass/create_token
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: notifications-message-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: message
deployment: notifications-server
description: notifications provider
endpoint: notifications-server.{{ .Release.Namespace }}
group: service.notification
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: SendMassage
uri: /notification/create_job
- name: SystemMessage
uri: /notification/system/push
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: notification-call-vault
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: notifications
appid: notifications
key: {{ .Values.os.notification.appKey }}
secret: {{ .Values.os.notification.appSecret }}
permissions:
- dataType: notification
group: service.vault
ops:
- Create
- Query
version: v1
- dataType: notification
group: service.desktop
ops:
- Create
- Query
version: v1
- dataType: secret
group: secret.infisical
ops:
- RetrieveSecret?workspace=notification
- CreateSecret?workspace=notification
- DeleteSecret?workspace=notification
- UpdateSecret?workspace=notification
- ListSecret?workspace=notification
version: v1
- dataType: app
group: service.bfl
ops:
- UserApps
version: v1
status:
state: active
# TODO: deploy a notification proxy

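The `init-container` in the deployment above blocks until PostgreSQL answers `SELECT 1`. The same wait-for-dependency gate, reduced to a plain TCP check, can be sketched in Python (host and port are whatever the dependent service exposes):

```python
import socket
import time

def wait_for_tcp(host: str, port: int,
                 timeout_s: float = 60.0, interval_s: float = 1.0) -> bool:
    """Poll until host:port accepts TCP connections, like the psql loop."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # server is accepting connections
        except OSError:
            time.sleep(interval_s)  # not up yet; retry
    return False
```

A TCP accept only proves the listener is up, not that the database is ready for queries, which is why the chart's init container goes one step further and runs `psql ... -c "SELECT 1"`.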
@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -40,4 +39,4 @@ os:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""
redis_password: ""

@@ -24,7 +24,7 @@ spec:
spec:
containers:
- name: rss-server
image: beclab/rsshub-server:v0.0.3
image: beclab/rsshub-server:v0.0.5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1200

@@ -199,7 +199,7 @@ spec:
value: os_system_search3
containers:
- name: search3
image: beclab/search3:v0.0.24
image: beclab/search3:v0.0.30
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080

apps/studio/README.md (new file)

@@ -0,0 +1,4 @@
# devbox
Terminus App development management tools
https://github.com/beclab/devbox

@@ -1,6 +1,6 @@
apiVersion: v2
name: gpu
description: A Helm chart for Kubernetes
name: studio
description: A Terminus app development tool
maintainers:
- name: bytetrade
@@ -17,10 +17,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.1.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.1.12"
appVersion: "4.9.1"

(binary image file added; 749 KiB, not shown)

@@ -0,0 +1,549 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $studio_secret := (lookup "v1" "Secret" $namespace "studio-secrets") -}}
{{- $pg_password := "" -}}
{{ if $studio_secret -}}
{{ $pg_password = (index $studio_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: studio-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: studio-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: studio
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: studio_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: studio-secrets
databases:
- name: studio
---
apiVersion: v1
kind: Service
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
spec:
selector:
app: studio-server
ports:
- protocol: TCP
port: 8080
targetPort: 8088
name: http
- protocol: TCP
port: 8083
targetPort: 8083
name: https
---
kind: Service
apiVersion: v1
metadata:
name: chartmuseum-studio
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8888
selector:
app: studio-server
---
apiVersion: v1
kind: ConfigMap
metadata:
name: studio-san-cnf
namespace: {{ .Release.Namespace }}
data:
san.cnf: |
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = CN
stateOrProvinceName = Beijing
localityName = Beijing
0.organizationName = bytetrade
commonName = studio-server.{{ .Release.Namespace }}.svc
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @bytetrade
[bytetrade]
DNS.1 = studio-server.{{ .Release.Namespace }}.svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
labels:
app: studio-server
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: studio-server
template:
metadata:
labels:
app: studio-server
spec:
serviceAccountName: bytetrade-controller
volumes:
- name: chart
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData}}/studio/Chart'
- name: data
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData }}/studio/Data'
- name: storage-volume
hostPath:
path: '{{ .Values.userspace.appData }}/studio/helm-repo-dev'
type: DirectoryOrCreate
- name: config-san
configMap:
name: studio-san-cnf
items:
- key: san.cnf
path: san.cnf
- name: sidecar-configs-studio
configMap:
name: sidecar-configs-studio
items:
- key: envoy.yaml
path: envoy.yaml
- name: certs
emptyDir: {}
initContainers:
- name: init-chmod-data
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- sh
- '-c'
- |
chown -R 1000:1000 /home/coder
chown -R 65532:65532 /charts
chown -R 65532:65532 /data
securityContext:
runAsUser: 0
resources: { }
volumeMounts:
- name: storage-volume
mountPath: /home/coder
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- name: terminus-sidecar-init
image: aboveos/openservicemesh-init:v1.2.3
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
:PROXY_OUTBOUND - [0:0]
:PROXY_OUT_REDIRECT - [0:0]
-A PREROUTING -p tcp -j PROXY_INBOUND
-A OUTPUT -p tcp -j PROXY_OUTBOUND
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp --dport 8083 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_OUTBOUND -p tcp --dport 5432 -j RETURN
-A PROXY_OUTBOUND -p tcp --dport 6379 -j RETURN
-A PROXY_OUTBOUND -p tcp --dport 27017 -j RETURN
-A PROXY_OUTBOUND -p tcp --dport 443 -j RETURN
-A PROXY_OUTBOUND -p tcp --dport 8080 -j RETURN
-A PROXY_OUTBOUND -d ${POD_IP}/32 -j RETURN
-A PROXY_OUTBOUND -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1555 -j PROXY_IN_REDIRECT
-A PROXY_OUTBOUND -o lo -m owner ! --uid-owner 1555 -j RETURN
-A PROXY_OUTBOUND -m owner --uid-owner 1555 -j RETURN
-A PROXY_OUTBOUND -d 127.0.0.1/32 -j RETURN
-A PROXY_OUTBOUND -j PROXY_OUT_REDIRECT
-A PROXY_OUT_REDIRECT -p tcp -j REDIRECT --to-port 15001
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
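The `PROXY_OUTBOUND` chain above is evaluated first-match-wins: database, cache, and TLS ports leave the pod directly, Envoy's own traffic (UID 1555) is exempted to avoid an interception loop, and everything else is redirected to the outbound listener on 15001. A simplified Python model of that walk (the loopback owner-matching rules are omitted; the UID and port set come straight from the rules above):

```python
BYPASS_PORTS = {5432, 6379, 27017, 443, 8080}  # DB/cache/TLS bypass the proxy
ENVOY_UID = 1555  # the sidecar's own UID must not be re-intercepted

def outbound_action(dport: int, uid: int, dest_ip: str, pod_ip: str) -> str:
    """Walk the PROXY_OUTBOUND rules in order; the first match wins."""
    if dport in BYPASS_PORTS:
        return "return"              # leaves the pod directly
    if dest_ip == pod_ip:
        return "return"              # pod talking to itself
    if uid == ENVOY_UID:
        return "return"              # Envoy's own egress, avoid a loop
    if dest_ip == "127.0.0.1":
        return "return"
    return "redirect:15001"          # everything else enters the sidecar
```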
- name: generate-certs
image: beclab/openssl:v3
imagePullPolicy: IfNotPresent
command: [ "/bin/sh", "-c" ]
args:
- |
openssl genrsa -out /etc/certs/ca.key 2048
openssl req -new -x509 -days 3650 -key /etc/certs/ca.key -out /etc/certs/ca.crt \
-subj "/CN=bytetrade CA/O=bytetrade/C=CN"
openssl req -new -newkey rsa:2048 -nodes \
-keyout /etc/certs/server.key -out /etc/certs/server.csr \
-config /etc/san/san.cnf
openssl x509 -req -days 3650 -in /etc/certs/server.csr \
-CA /etc/certs/ca.crt -CAkey /etc/certs/ca.key \
-CAcreateserial -out /etc/certs/server.crt \
-extensions v3_req -extfile /etc/san/san.cnf
chown -R 65532 /etc/certs/*
volumeMounts:
- name: config-san
mountPath: /etc/san
- name: certs
mountPath: /etc/certs
containers:
- name: studio
image: beclab/studio-server:v0.1.50
imagePullPolicy: IfNotPresent
args:
- server
ports:
- name: port
containerPort: 8088
protocol: TCP
- name: ssl-port
containerPort: 8083
protocol: TCP
volumeMounts:
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- mountPath: /etc/certs
name: certs
lifecycle:
preStop:
exec:
command:
- "/studio"
- "clean"
env:
- name: BASE_DIR
value: /charts
- name: OS_API_KEY
value: {{ .Values.os.studio.appKey }}
- name: OS_API_SECRET
value: {{ .Values.os.studio.appSecret }}
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: NAME_SPACE
value: {{ .Release.Namespace }}
- name: OWNER
value: '{{ .Values.bfl.username }}'
- name: DB_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: DB_USERNAME
value: studio_{{ .Values.bfl.username }}
- name: DB_PASSWORD
value: "{{ $pg_password | b64dec }}"
- name: DB_NAME
value: user_space_{{ .Values.bfl.username }}_studio
- name: DB_PORT
value: "5432"
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: "0.5"
memory: 1000Mi
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11.1
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1555
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
- name: proxy-outbound
containerPort: 15001
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: "0.5"
memory: 200Mi
volumeMounts:
- name: sidecar-configs-studio
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: APP_KEY
value: {{ .Values.os.studio.appKey }}
- name: APP_SECRET
value: {{ .Values.os.studio.appSecret }}
- name: chartmuseum
image: aboveos/helm-chartmuseum:v0.15.0
args:
- '--port=8888'
- '--storage-local-rootdir=/storage'
ports:
- name: http
containerPort: 8888
protocol: TCP
env:
- name: CHART_POST_FORM_FIELD_NAME
value: chart
- name: DISABLE_API
value: 'false'
- name: LOG_JSON
value: 'true'
- name: PROV_POST_FORM_FIELD_NAME
value: prov
- name: STORAGE
value: local
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: 1000m
memory: 512Mi
volumeMounts:
- name: storage-volume
mountPath: /storage
livenessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
data:
envoy.yaml: |
admin:
access_log_path: "/dev/stdout"
address:
socket_address:
address: 0.0.0.0
port_value: 15000
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 15003
listener_filters:
- name: envoy.filters.listener.original_dst
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: desktop_http
upgrade_configs:
- upgrade_type: websocket
- upgrade_type: tailscale-control-protocol
skip_xff_append: false
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 1800s
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
- name: listener_1
address:
socket_address:
address: 0.0.0.0
port_value: 15001
listener_filters:
- name: envoy.filters.listener.original_dst
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: studio_out_http
skip_xff_append: false
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/server/intent/send"
request_headers_to_add:
- header:
key: X-App-Key
value: {{ .Values.os.studio.appKey }}
route:
cluster: system-server
prefix_rewrite: /system-server/v2/legacy_api/api.intent/v2/server/intent/send
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 1800s
typed_per_filter_config:
envoy.filters.http.lua:
"@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.LuaPerRoute
disabled: true
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.lua
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
inline_code: |
local sha = require("lib.sha2")
function envoy_on_request(request_handle)
local app_key = os.getenv("APP_KEY")
local app_secret = os.getenv("APP_SECRET")
local current_time = os.time()
local minute_level_time = current_time - (current_time % 60)
local time_string = tostring(minute_level_time)
local s = app_key .. app_secret .. time_string
request_handle:logInfo("originstring:" .. s)
local hash = sha.sha256(s)
request_handle:logInfo("Hello World.")
request_handle:logInfo(hash)
request_handle:headers():add("X-Auth-Signature",hash)
end
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
- name: original_dst
connect_timeout: 5000s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
- name: system-server
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: system-server
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: system-server.user-system-{{ .Values.bfl.username }}
port_value: 80
kind: ConfigMap
metadata:
name: sidecar-configs-studio
namespace: {{ .Release.Namespace }}

@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -30,14 +29,14 @@ os:
message:
appKey: '${ks[0]}'
appSecret: test
wise:
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
studio:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""
redis_password: ""

@@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: monitoring-server
image: beclab/monitoring-server-v1:v0.2.3
image: beclab/monitoring-server-v1:v0.2.5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000

@@ -109,6 +109,19 @@ spec:
port: 3010
targetPort: 3010
---
apiVersion: v1
kind: Service
metadata:
name: studio-svc
namespace: {{ .Release.Namespace }}
spec:
selector:
app: system-frontend
ports:
- protocol: TCP
port: 8080
targetPort: 87
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -121,11 +134,11 @@ metadata:
applications.app.bytetrade.io/group: 'true'
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/icon: '{"dashboard":"https://file.bttcdn.com/appstore/dashboard/icon.png","control-hub":"https://file.bttcdn.com/appstore/control-hub/icon.png","profile":"https://file.bttcdn.com/appstore/profile/icon.png","wise":"https://file.bttcdn.com/appstore/rss/icon.png","headscale": "https://file.bttcdn.com/appstore/headscale/icon.png","settings": "https://file.bttcdn.com/appstore/settings/icon.png"}'
applications.app.bytetrade.io/title: '{"dashboard": "Dashboard","control-hub":"Control Hub","profile":"Profile","wise":"Wise","headscale":"Headscale","settings":"Settings"}'
applications.app.bytetrade.io/version: '{"dashboard": "0.0.1","control-hub":"0.0.1","profile":"0.0.1","wise":"0.0.1","headscale":"0.0.1","settings":"0.0.1"}'
applications.app.bytetrade.io/icon: '{"dashboard":"https://file.bttcdn.com/appstore/dashboard/icon.png","control-hub":"https://file.bttcdn.com/appstore/control-hub/icon.png","profile":"https://file.bttcdn.com/appstore/profile/icon.png","wise":"https://file.bttcdn.com/appstore/rss/icon.png","headscale": "https://file.bttcdn.com/appstore/headscale/icon.png","settings": "https://file.bttcdn.com/appstore/settings/icon.png","studio":"https://file.bttcdn.com/appstore/devbox/icon.png"}'
applications.app.bytetrade.io/title: '{"dashboard": "Dashboard","control-hub":"Control Hub","profile":"Profile","wise":"Wise","headscale":"Headscale","settings":"Settings","studio":"Studio"}'
applications.app.bytetrade.io/version: '{"dashboard": "0.0.1","control-hub":"0.0.1","profile":"0.0.1","wise":"0.0.1","headscale":"0.0.1","settings":"0.0.1","studio":"0.0.1"}'
applications.app.bytetrade.io/policies: '{"dashboard":{"policies":[{"entranceName":"dashboard","uriRegex":"/js/script.js", "level":"public"},{"entranceName":"dashboard","uriRegex":"/js/api/send", "level":"public"}]}}'
applications.app.bytetrade.io/entrances: '{"dashboard":[{"name":"dashboard","host":"dashboard-service","port":80,"title":"Dashboard","windowPushState":true}],"control-hub":[{"name":"control-hub","host":"control-hub-service","port":80,"title":"Control Hub","windowPushState":true}],"profile":[{"name":"profile", "host":"profile-service", "port":80,"title":"Profile","windowPushState":true}],"wise":[{"name":"wise", "host":"wise-svc", "port":80,"title":"Wise","windowPushState":true}],"headscale":[{"name":"headscale", "host":"headscale-svc", "port":80,"title":"Headscale","invisible": true}],"settings":[{"name":"settings", "host":"settings-service", "port":80,"title":"Settings"}]}'
applications.app.bytetrade.io/entrances: '{"dashboard":[{"name":"dashboard","host":"dashboard-service","port":80,"title":"Dashboard","windowPushState":true}],"control-hub":[{"name":"control-hub","host":"control-hub-service","port":80,"title":"Control Hub","windowPushState":true}],"profile":[{"name":"profile", "host":"profile-service", "port":80,"title":"Profile","windowPushState":true}],"wise":[{"name":"wise", "host":"wise-svc", "port":80,"title":"Wise","windowPushState":true}],"headscale":[{"name":"headscale", "host":"headscale-svc", "port":80,"title":"Headscale","invisible": true}],"settings":[{"name":"settings", "host":"settings-service", "port":80,"title":"Settings"}],"studio":[{"name":"studio","host":"studio-svc","port":8080,"title":"Studio","openMethod":"window"}]}'
spec:
replicas: 1
selector:
@@ -136,7 +149,13 @@ spec:
labels:
app: system-frontend
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-nodejs: "olares-instrumentation"
instrumentation.opentelemetry.io/nodejs-container-names: "settings-server"
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "system-frontend"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
@@ -177,7 +196,7 @@ spec:
apiVersion: v1
fieldPath: status.podIP
- name: dashboard-init
image: beclab/dashboard-frontend-v1:v0.4.7
image: beclab/dashboard-frontend-v1:v0.4.9
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -189,7 +208,7 @@ spec:
- mountPath: /www
name: www-dir
- name: control-hub-init
image: beclab/admin-console-frontend-v1:v0.4.9
image: beclab/admin-console-frontend-v1:v0.5.8
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -201,7 +220,7 @@ spec:
- mountPath: /www
name: www-dir
- name: profile-editor-init
image: beclab/profile-editor:v0.2.1
image: beclab/profile-editor:v0.2.21
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -213,7 +232,7 @@ spec:
- mountPath: /www
name: www-dir
- name: profile-preview-init
image: beclab/profile-preview:v0.2.1
image: beclab/profile-preview:v0.2.21
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -225,7 +244,7 @@ spec:
- mountPath: /www
name: www-dir
- name: wise-init
image: beclab/wise:v1.3.9
image: beclab/wise:v1.3.55
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -237,7 +256,7 @@ spec:
- mountPath: /www
name: www-dir
- name: settings-init
image: beclab/settings:v0.2.1
image: beclab/settings:v1.3.62
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -248,6 +267,18 @@ spec:
volumeMounts:
- mountPath: /www
name: www-dir
- name: studio-init
image: beclab/studio:v0.2.16
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- |
mkdir -p /www/studio
cp -r /app/* /www/studio
volumeMounts:
- mountPath: /www
name: www-dir
containers:
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
@@ -274,7 +305,7 @@ spec:
- -c
- /etc/envoy/envoy.yaml
- name: system-frontend
image: beclab/docker-nginx-headers-more:v0.1.0
image: beclab/docker-nginx-headers-more:ubuntu-v0.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
@@ -298,7 +329,7 @@ spec:
- name: www-dir
mountPath: /www
- name: wise-download-dir
mountPath: /data/Home/Downloads
mountPath: /data/Home
- name: system-frontend-nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
@@ -320,6 +351,9 @@ spec:
- name: system-frontend-nginx-config
mountPath: /etc/nginx/conf.d/settings.conf
subPath: settings.conf
- name: system-frontend-nginx-config
mountPath: /etc/nginx/conf.d/studio.conf
subPath: studio.conf
env:
- name: POD_UID
valueFrom:
@@ -338,7 +372,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.4'
image: 'beclab/ws-gateway:v1.0.5'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
@@ -351,7 +385,7 @@ spec:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- name: settings-server
image: beclab/settings-server:v0.2.0
image: beclab/settings-server:v0.2.23
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
@@ -391,7 +425,7 @@ spec:
- name: userspace-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}
path: '{{ .Values.userspace.userData }}'
- name: terminus-sidecar-config
configMap:
name: sidecar-configs
@@ -403,7 +437,7 @@ spec:
- name: wise-download-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}/Downloads
path: '{{ .Values.userspace.userData }}'
- name: system-frontend-nginx-config
configMap:
name: system-frontend-nginx-config
@@ -422,6 +456,8 @@ spec:
path: headscale.conf
- key: settings.conf
path: settings.conf
- key: studio.conf
path: studio.conf
---
@@ -477,6 +513,31 @@ status:
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: studio
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: studio
appid: studio
key: {{ .Values.os.studio.appKey }}
secret: {{ .Values.os.studio.appSecret }}
permissions:
- dataType: app
group: service.appstore
ops:
- InstallDevApp
- UninstallDevApp
version: v1
- dataType: legacy_api
group: api.intent
ops:
- POST
version: v2
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: settings
namespace: user-system-{{ .Values.bfl.username }}
@@ -612,6 +673,16 @@ metadata:
namespace: user-system-{{ .Values.bfl.username }}
spec:
callbacks:
- filters:
type:
- backup-state-event
op: Create
uri: /api/event/backup_state_event
- filters:
type:
- restore-state-event
op: Create
uri: /api/event/restore_state_event
- filters:
type:
- app-installation-event
@@ -622,6 +693,11 @@ spec:
- settings-event
op: Create
uri: /api/event/app_installation_event
- filters:
type:
- entrance-state-event
op: Create
uri: /api/event/entrance_state_event
- filters:
type:
- system-upgrade-event
@@ -748,6 +824,10 @@ data:
server anayltic2-server.os-system:3010;
}
upstream HamiServer {
server hami-webui.kube-system:3000;
}
server {
listen 81;
gzip off;
@@ -766,6 +846,14 @@ data:
expires 0;
}
location /ws {
proxy_pass http://127.0.0.1:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /bfl {
add_header 'Access-Control-Allow-Headers' 'x-api-nonce,x-api-ts,x-api-ver,x-api-source';
proxy_pass http://bfl;
@@ -780,6 +868,18 @@ data:
proxy_pass http://SettingsServer;
}
location /hami/ {
proxy_pass http://HamiServer/;
}
location /api/profile/init {
proxy_pass http://127.0.0.1:3010;
proxy_set_header Host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api {
proxy_pass http://SettingsServer;
}
@@ -1013,7 +1113,7 @@ data:
}
wise.conf: |-
upstream KnowledgeServer {
server rss-svc:3010;
server rss-svc.os-system:3010;
}
upstream RSSServer {
@@ -1021,7 +1121,7 @@ data:
}
upstream ArgoworkflowsSever {
server argoworkflows-svc:2746;
server argoworkflows-svc.os-system:2746;
}
server {
@@ -1049,7 +1149,7 @@ data:
}
location /ws {
proxy_pass http://rss-svc:40010;
proxy_pass http://rss-svc.os-system:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
@@ -1088,9 +1188,9 @@ data:
proxy_pass http://ArgoworkflowsSever;
}
location ~ ^/download/preview/Downloads/(.*)$
location ~ ^/download/preview/(.*)$
{
alias /data/Home/Downloads/$1;
alias /data/Home/$1;
}
location /videos/ {
@@ -1111,6 +1211,44 @@ data:
proxy_pass http://media-server-service.os-system:9090;
}
location /api {
proxy_pass http://files-service.os-system:80;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /upload {
proxy_pass http://files-service.os-system:80;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
# # files
# # for all routes matching a dot, check for files and return 404 if not found
# # e.g. /file.js returns a 404 if not found
@@ -1155,8 +1293,8 @@ data:
server infisical-service:8080;
}
upstream NotificationServer {
server notifications-server;
upstream BackupServer {
server backup-server.os-system:8082;
}
server {
@@ -1216,6 +1354,31 @@ data:
proxy_set_header X-Forwarded-Host $host;
}
location /apis/backup {
proxy_pass http://backup-server.os-system:8082;
add_header Accept "application/json, text/plain, */*";
add_header Content-Type "application/json; charset=utf-8";
}
location /api/resources {
proxy_pass http://files-service.os-system:80;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /drive {
proxy_pass http://127.0.0.1:8080;
@@ -1254,11 +1417,193 @@ data:
proxy_set_header X-Forwarded-Host $host;
}
location /notification {
proxy_pass http://NotificationServer;
}
location ~.*\.(js|css|png|jpg|svg|woff|woff2)$ {
add_header Cache-Control "public, max-age=2678400";
}
}
studio.conf: |-
upstream SettingsServerStudio {
server monitoring-server.os-system;
}
upstream MiddlewareStudio {
server middleware-service.os-system;
}
upstream AnalyticsStudio {
server anayltic2-server.os-system:3010;
}
server {
listen 87;
# Gzip Settings
gzip off;
gzip_disable "msie6";
gzip_min_length 1k;
gzip_buffers 16 64k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types *;
root /www/studio;
location / {
try_files $uri $uri/index.html /index.html;
add_header Cache-Control "private,no-cache";
add_header Last-Modified "Oct, 03 Jan 2022 13:46:41 GMT";
expires 0;
}
location /api/command {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/apps {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/app-cfg {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/app-state {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/app-status {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/list-my-containers {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /api/files {
proxy_pass http://studio-server:8080;
proxy_set_header Host $http_host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_set_header Accept-Encoding gzip;
proxy_read_timeout 180;
}
location /ws {
proxy_pass http://127.0.0.1:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /bfl {
add_header 'Access-Control-Allow-Headers' 'x-api-nonce,x-api-ts,x-api-ver,x-api-source';
proxy_pass http://bfl;
proxy_set_header Host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-Frame-Options SAMEORIGIN;
}
location /kapis {
proxy_pass http://SettingsServerStudio;
}
location /api/profile/init {
proxy_pass http://127.0.0.1:3010;
proxy_set_header Host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api {
proxy_pass http://SettingsServerStudio;
}
location /capi {
proxy_pass http://SettingsServerStudio;
proxy_set_header Host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location = /js/api/send {
proxy_pass http://AnalyticsStudio;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
rewrite ^/js(.*)$ $1 break;
}
location /analytics_service {
proxy_pass http://AnalyticsStudio;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
rewrite ^/analytics_service(.*)$ $1 break;
}
location ~ /(kapis/terminal|api/v1/watch|apis/apps/v1/watch) {
proxy_pass http://SettingsServerStudio;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
}
location = /js/script.js {
add_header Access-Control-Allow-Origin "*";
}
location ~.*\.(js|css|png|jpg|svg|woff|woff2)$ {
add_header Cache-Control "public, max-age=2678400";
}
}
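A quick way to sanity-check the `rewrite ^/js(.*)$ $1 break;` rule in the analytics locations above is to replay the same substitution with sed (standing in here for nginx's rewrite engine; the URI is just a sample):

```shell
# Emulate nginx's `rewrite ^/js(.*)$ $1 break;` on a sample request URI.
uri="/js/api/send"
rewritten=$(printf '%s' "$uri" | sed -E 's#^/js(.*)$#\1#')
echo "$rewritten"   # prints /api/send
```

The stripped path is what actually reaches the AnalyticsStudio upstream.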

View File

@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -18,10 +17,10 @@ docs:
desktop:
nodeport: 30180
os:
portfolio:
profile:
appKey: '${ks[0]}'
appSecret: test
vault:
studio:
appKey: '${ks[0]}'
appSecret: test
desktop:
@@ -39,5 +38,11 @@ os:
search2:
appKey: '${ks[0]}'
appSecret: test
settings:
appKey: '${ks[0]}'
appSecret: test
dashboard:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""
redis_password: ""

View File

@@ -83,7 +83,7 @@ spec:
value: os_system_vault
containers:
- name: vault-server
image: beclab/vault-server:v1.3.9
image: beclab/vault-server:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
@@ -114,7 +114,7 @@ spec:
- name: vault-attach
mountPath: /padloc/packages/server/attachments
- name: vault-admin
image: beclab/vault-admin:v1.3.9
image: beclab/vault-admin:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
@@ -135,11 +135,11 @@ spec:
- name: vault-data
hostPath:
type: DirectoryOrCreate
path: {{ $vault_rootpath }}/data
path: '{{ $vault_rootpath }}/data'
- name: vault-attach
hostPath:
type: DirectoryOrCreate
path: {{ $vault_rootpath }}/attachments
path: '{{ $vault_rootpath }}/attachments'
---
apiVersion: v1
kind: Service

View File

@@ -1,3 +1,13 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $vault_nats_secret := (lookup "v1" "Secret" $namespace "vault-nats-secrets") -}}
{{- $vault_nats_password := "" -}}
{{ if $vault_nats_secret -}}
{{ $vault_nats_password = (index $vault_nats_secret "data" "vault_nats_password") }}
{{ else -}}
{{ $vault_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
@@ -36,6 +46,12 @@ spec:
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
@@ -72,13 +88,13 @@ spec:
containers:
- name: vault-frontend
image: beclab/vault-frontend:v1.3.9
image: beclab/vault-frontend:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: notification-server
image: beclab/vault-notification:v1.3.9
image: beclab/vault-notification:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
@@ -93,6 +109,17 @@ spec:
value: '{{ .Values.os.vault.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.vault.appKey }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-vault
- name: NATS_PASSWORD
value: {{ $vault_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
@@ -238,3 +265,38 @@ spec:
version: v1
status:
state: active
---
apiVersion: v1
kind: Secret
metadata:
name: vault-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
vault_nats_password: {{ $vault_nats_password }}
type: Opaque
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: vault-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: vault
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: vault_nats_password
name: vault-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-vault

View File

@@ -1,4 +1,3 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
@@ -40,4 +39,4 @@ os:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""
redis_password: ""

View File

@@ -61,7 +61,7 @@ spec:
containers:
- name: wizard
image: beclab/wizard:v0.5.12
image: beclab/wizard:v1.3.57
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
@@ -132,7 +132,7 @@ spec:
- name: userspace-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}
path: "{{ .Values.userspace.userData }}"
# - name: terminus-sidecar-config
# configMap:
# name: sidecar-configs

View File

@@ -1,4 +1,3 @@
bfl:
username: 'test'
url: 'test'

View File

@@ -28,6 +28,8 @@ spec:
spec:
runtimeClassName: nvidia # Explicitly request the runtime
priorityClassName: system-node-critical
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
initContainers:
- name: init-dir
image: busybox:1.28

View File

@@ -44,6 +44,8 @@ spec:
# be rescheduled after a failure.
# See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
priorityClassName: "system-node-critical"
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
containers:
- image: nvcr.io/nvidia/k8s-device-plugin:v0.16.1
name: nvidia-device-plugin-ctr

View File

@@ -26,8 +26,9 @@ spec:
labels:
name: nvshare-scheduler
spec:
runtimeClassName: nvidia # Explicitly request the runtime
priorityClassName: system-node-critical
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
initContainers:
- name: init-dir
image: busybox:1.28

View File

@@ -1,6 +1,8 @@
$currentPath = Get-Location
$architecture = $env:PROCESSOR_ARCHITECTURE
$downloadCdnUrlFromEnv = $env:DOWNLOAD_CDN_URL
$version = "#__VERSION__"
$downloadUrl = "https://dc3p1870nn3cj.cloudfront.net"
function Test-Wait {
while ($true) {
@@ -8,42 +10,78 @@ function Test-Wait {
}
}
$runAsAdmin = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
if (-not $runAsAdmin.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
Write-Host "`n`nThe installation script needs to be run as an administrator.`n"
Write-Host "Please try the following methods:`n"
Write-Host "1. Search for 'PowerShell' in the Start menu, right-click it, and select 'Run as administrator'. "
Write-Host " Navigate to the directory where the installation script is located and run the installation script.`n"
Write-Host "2. Press Win + R, type 'powershell', and then press Ctrl + Shift + Enter. "
Write-Host " Navigate to the directory where the installation script is located and run the installation script.`n"
Write-Host "`nPress Ctrl+C to exit.`n"
Test-Wait
}
$process = Get-Process -Name olares-cli -ErrorAction SilentlyContinue
if ($process) {
Write-Host "olares-cli.exe is running. Press Ctrl+C to exit."
Test-Wait
}
$distro = wsl --list | Select-String -Pattern "^Ubuntu$"
if ($distro) {
Write-Host "Distro Ubuntu exists, please unregister it first."
exit 1
}
$arch = "amd64"
if ($architecture -like "ARM*") {
$arch = "arm64"
}
$CLI_VERSION = "0.1.84"
if ($downloadCdnUrlFromEnv) {
$downloadUrl = $downloadCdnUrlFromEnv
}
$CLI_PROGRAM_PATH = "{0}\" -f $currentPath
if (-Not (Test-Path $CLI_PROGRAM_PATH)) {
New-Item -Path $CLI_PROGRAM_PATH -ItemType Directory
}
$CLI_VERSION = "0.2.35"
$CLI_FILE = "olares-cli-v{0}_windows_{1}.tar.gz" -f $CLI_VERSION, $arch
$CLI_URL = "https://dc3p1870nn3cj.cloudfront.net/{0}" -f $CLI_FILE
$CLI_PATH = "{0}\{1}" -f $currentPath, $CLI_FILE
if (-Not (Test-Path $CLI_FILE)) {
$CLI_URL = "{0}/{1}" -f $downloadUrl, $CLI_FILE
$CLI_PATH = "{0}{1}" -f $CLI_PROGRAM_PATH, $CLI_FILE
$download = 0
if (Test-Path $CLI_PATH) {
tar -xzf $CLI_PATH -C $CLI_PROGRAM_PATH *> $null
if (-Not ($LASTEXITCODE -eq 0)) {
Remove-Item -Path $CLI_PATH
$download = 1
}
} else {
$download = 1
}
if ($download -eq 1) {
Write-Host "Downloading olares-cli.exe..."
curl -Uri $CLI_URL -OutFile $CLI_PATH
if (-Not (Test-Path $CLI_PATH)) {
Write-Host "Download olares-cli.exe failed."
exit 1
}
tar -xzf $CLI_PATH -C $CLI_PROGRAM_PATH *> $null
$cliPath = "{0}\olares-cli.exe" -f $CLI_PROGRAM_PATH
if ( -Not (Test-Path $cliPath)) {
Write-Host "olares-cli.exe not found."
exit 1
}
}
if (-Not (Test-Path $CLI_PATH)) {
Write-Host "Download olares-cli.exe failed."
exit 1
}
tar -xf $CLI_PATH
$cliPath = "{0}\olares-cli.exe" -f $currentPath
if ( -Not (Test-Path $cliPath)) {
Write-Host "olares-cli.exe not found."
exit 1
}
wsl --unregister Ubuntu *> $null
Start-Sleep -Seconds 3
Write-Host ("Preparing to start the installation of Olares {0}. Depending on your network conditions, this process may take several minutes." -f $version)
$command = "{0} olares install --version {1}" -f $cliPath, $version
$command = "{0}\olares-cli.exe install --version {1}" -f $CLI_PROGRAM_PATH, $version
Start-Process cmd -ArgumentList '/k',$command -Wait -Verb RunAs

View File

@@ -74,7 +74,7 @@ if [ -z ${cdn_url} ]; then
cdn_url="https://dc3p1870nn3cj.cloudfront.net"
fi
CLI_VERSION="0.1.85"
CLI_VERSION="0.2.35"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
if [[ x"$os_type" == x"Darwin" ]]; then
CLI_FILE="olares-cli-v${CLI_VERSION}_darwin_${ARCH}.tar.gz"
@@ -137,16 +137,22 @@ else
echo ""
else
echo "building local release ..."
$sh_c "$INSTALL_OLARES_CLI olares release $PARAMS $CDN"
$sh_c "$INSTALL_OLARES_CLI release $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to build local release"
exit 1
fi
fi
else
echo "running system prechecks ..."
echo ""
$sh_c "$INSTALL_OLARES_CLI precheck $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "downloading installation wizard..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares download wizard $PARAMS $KUBE_PARAM $CDN"
$sh_c "$INSTALL_OLARES_CLI download wizard $PARAMS $KUBE_PARAM $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation wizard"
exit 1
@@ -155,7 +161,7 @@ else
echo "downloading installation packages..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares download component $PARAMS $KUBE_PARAM $CDN"
$sh_c "$INSTALL_OLARES_CLI download component $PARAMS $KUBE_PARAM $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation packages"
exit 1
@@ -167,7 +173,7 @@ else
if [ x"$REGISTRY_MIRRORS" != x"" ]; then
extra="--registry-mirrors $REGISTRY_MIRRORS"
fi
$sh_c "$INSTALL_OLARES_CLI olares prepare $PARAMS $KUBE_PARAM $extra"
$sh_c "$INSTALL_OLARES_CLI prepare $PARAMS $KUBE_PARAM $extra"
if [[ $? -ne 0 ]]; then
echo "error: failed to prepare installation environment"
exit 1
@@ -192,15 +198,30 @@ if [[ "$JUICEFS" == "1" ]]; then
else
echo "checking storage config ..."
fi
$sh_c "$INSTALL_OLARES_CLI olares install storage $PARAMS"
$sh_c "$INSTALL_OLARES_CLI install storage $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
fi
if [[ -n "$SWAPPINESS" ]]; then
swapflag="$swapflag --swappiness $SWAPPINESS"
fi
if [[ "$ENABLE_POD_SWAP" == "1" ]]; then
swapflag="$swapflag --enable-pod-swap"
fi
if [[ "$ENABLE_ZRAM" == "1" ]]; then
swapflag="$swapflag --enable-zram"
fi
if [[ -n "$ZRAM_SIZE" ]]; then
swapflag="$swapflag --zram-size $ZRAM_SIZE"
fi
if [[ -n "$ZRAM_SWAP_PRIORITY" ]]; then
swapflag="$swapflag --zram-swap-priority $ZRAM_SWAP_PRIORITY"
fi
echo "installing Olares..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares install $PARAMS $KUBE_PARAM $fsflag"
$sh_c "$INSTALL_OLARES_CLI install $PARAMS $KUBE_PARAM $fsflag $swapflag"
if [[ $? -ne 0 ]]; then
echo "error: failed to install Olares"

build/installer/joincluster.sh (new executable file, 261 lines)
View File

@@ -0,0 +1,261 @@
#!/usr/bin/env bash
set -o pipefail
set -e
function command_exists() {
command -v "$@" > /dev/null 2>&1
}
function read_tty() {
echo -n $1
read $2 < /dev/tty
}
function confirm() {
if [[ "$QUIET" == "1" ]]; then
return 0
fi
answer=""
while :; do
read_tty "Do you confirm to continue? (y/n): " answer
if [[ "$answer" != "y" && "$answer" != "n" ]]; then
echo "Please input the letter y or n"
continue
fi
if [[ "$answer" == "y" ]]; then
return 0
fi
if [[ "$answer" == "n" ]]; then
exit 0
fi
done
}
function validate_ip() {
if [[ ! "$1" ]]; then
echo "invalid IP: empty address"
return 1
elif [[ ! $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "invalid IP: illegal format"
return 1
elif [[ $1 =~ ^127 ]]; then
echo "invalid IP: loopback address"
return 1
else
return 0
fi
}
MASTER_SSH_OPTIONS=""
function add_master_host_ssh_options() {
MASTER_SSH_OPTIONS="$MASTER_SSH_OPTIONS --$1 $2"
}
function set_master_host_ssh_options() {
master_host="$MASTER_HOST"
if [[ ! "$master_host" ]]; then
read_tty "Enter the master node's IP: " master_host
fi
while :; do
if ! validate_ip "$master_host"; then
read_tty "Enter the master node's IP: " master_host
else
break
fi
done
add_master_host_ssh_options master-host "$master_host"
if [[ "$MASTER_NODE_NAME" ]]; then
add_master_host_ssh_options master-node-name "$MASTER_NODE_NAME"
fi
if [[ "$MASTER_SSH_USER" ]]; then
add_master_host_ssh_options master-ssh-user "$MASTER_SSH_USER"
else
echo "the environment variable \$MASTER_SSH_USER is not set"
echo "the default remote user \"root\" on the master node will be used to authenticate"
echo "if this is unexpected, please set it explicitly"
confirm
fi
if [[ "$MASTER_SSH_PASSWORD" ]]; then
add_master_host_ssh_options master-ssh-password "$MASTER_SSH_PASSWORD"
fi
if [[ "$MASTER_SSH_PRIVATE_KEY_PATH" ]]; then
add_master_host_ssh_options master-ssh-private-key-path "$MASTER_SSH_PRIVATE_KEY_PATH"
elif [[ ! "$MASTER_SSH_PASSWORD" ]]; then
echo "the environment variable \$MASTER_SSH_PRIVATE_KEY_PATH is not set"
echo "the default key in the local path /root/.ssh/id_rsa will be used to authenticate to the master"
echo "please make sure the key exists and the public key has already been added to the master node"
echo "if this is unexpected, please set it explicitly"
confirm
fi
if [[ "$MASTER_SSH_PORT" ]]; then
add_master_host_ssh_options master-ssh-port "$MASTER_SSH_PORT"
fi
}
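Stripped to its essentials, the flag-accumulation pattern in `add_master_host_ssh_options` above works like this (the host and user values are hypothetical):

```shell
# Accumulate --master-* flags into a single option string, as the
# function above does; note the leading space from the first append.
MASTER_SSH_OPTIONS=""
add_master_host_ssh_options() {
    MASTER_SSH_OPTIONS="$MASTER_SSH_OPTIONS --$1 $2"
}
add_master_host_ssh_options master-host 192.168.1.10   # hypothetical IP
add_master_host_ssh_options master-ssh-user ubuntu     # hypothetical user
echo "$MASTER_SSH_OPTIONS"
```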
function getmasterinfo() {
$sh_c "$INSTALL_OLARES_CLI node masterinfo $MASTER_SSH_OPTIONS" | tee /proc/$$/fd/1
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "" > /proc/$$/fd/1
}
# check os type and arch
os_type=$(uname -s)
os_arch=$(uname -m)
case "$os_arch" in
arm64) ARCH=arm64; ;;
x86_64) ARCH=amd64; ;;
armv7l) ARCH=arm; ;;
aarch64) ARCH=arm64; ;;
ppc64le) ARCH=ppc64le; ;;
s390x) ARCH=s390x; ;;
*) echo "error: unsupported arch \"$os_arch\"";
exit 1; ;;
esac
if [[ "$os_type" != "Linux" ]]; then
echo "error: only Linux machines can be added to the cluster"
exit 1
fi
# set shell execute command
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if ! command_exists sudo; then
echo "error: the ability to run as root is needed, but the command \"sudo\" can not be found"
exit 1
fi
sh_c='sudo -E sh -c'
fi
if ! command_exists tar; then
echo "error: the \"tar\" command is needed to unpack installation files, but can not be found"
exit 1
fi
BASE_DIR="$HOME/.olares"
if [ ! -d $BASE_DIR ]; then
mkdir -p $BASE_DIR
fi
cdn_url=${DOWNLOAD_CDN_URL}
if [[ -z "${cdn_url}" ]]; then
cdn_url="https://dc3p1870nn3cj.cloudfront.net"
fi
set_master_host_ssh_options
CLI_VERSION="0.2.35"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$CLI_VERSION" ]]; then
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli already installed and is the expected version"
echo ""
else
if [[ ! -f ${CLI_FILE} ]]; then
CLI_URL="${cdn_url}/${CLI_FILE}"
echo "downloading Olares installer from ${CLI_URL} ..."
echo ""
curl -Lo ${CLI_FILE} ${CLI_URL}
if [[ $? -ne 0 ]]; then
echo "error: failed to download Olares installer"
exit 1
else
echo "Olares installer ${CLI_VERSION} download complete!"
echo ""
fi
fi
INSTALL_OLARES_CLI="/usr/local/bin/olares-cli"
echo "unpacking Olares installer to $INSTALL_OLARES_CLI..."
echo ""
tar -zxf ${CLI_FILE} olares-cli && chmod +x olares-cli
$sh_c "mv olares-cli $INSTALL_OLARES_CLI"
if [[ $? -ne 0 ]]; then
echo "error: failed to unpack Olares installer"
exit 1
fi
fi
echo "getting master info and checking current machine's eligibility to join the cluster"
echo ""
master_olares_version="$( getmasterinfo | grep OlaresVersion | awk '{print $2}' )"
if [[ ! "$master_olares_version" ]]; then
echo "failed to fetch the version of Olares installed on master node"
exit 1
fi
PARAMS="--version $master_olares_version --base-dir $BASE_DIR"
CDN="--download-cdn-url ${cdn_url}"
if [[ -f $BASE_DIR/.prepared ]]; then
echo "file $BASE_DIR/.prepared detected, skipping the prepare phase"
echo ""
echo "please make sure the prepared Olares version matches the master's, or there may be compatibility issues"
echo ""
else
echo "running system prechecks ..."
echo ""
$sh_c "$INSTALL_OLARES_CLI precheck $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "downloading installation wizard..."
echo ""
$sh_c "$INSTALL_OLARES_CLI download wizard $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation wizard"
exit 1
fi
echo "downloading installation packages..."
echo ""
$sh_c "$INSTALL_OLARES_CLI download component $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation packages"
exit 1
fi
echo "preparing installation environment..."
echo ""
# env 'REGISTRY_MIRRORS' is a docker image cache mirrors, separated by commas
if [ x"$REGISTRY_MIRRORS" != x"" ]; then
extra="--registry-mirrors $REGISTRY_MIRRORS"
fi
$sh_c "$INSTALL_OLARES_CLI prepare $PARAMS $extra"
if [[ $? -ne 0 ]]; then
echo "error: failed to prepare installation environment"
exit 1
fi
fi
if [ -f $BASE_DIR/.installed ]; then
echo "file $BASE_DIR/.installed detected, skipping installation"
echo "if it was left by an unclean uninstallation, please remove it manually and run the installer again"
exit 0
fi
echo "installing Kubernetes and joining Olares cluster..."
echo ""
$sh_c "$INSTALL_OLARES_CLI node add $PARAMS $MASTER_SSH_OPTIONS"
if [[ $? -ne 0 ]]; then
echo "error: failed to install Olares"
exit 1
fi
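One caveat worth noting about `validate_ip` in joincluster.sh above: its regex only checks the dotted-quad shape, so an address like `300.1.1.1` passes. A stricter sketch that keeps the script's loopback rule and adds a per-octet range check:

```shell
# Stricter variant of the script's validate_ip: same shape and loopback
# checks, plus a 0-255 range check on each octet (10# forces base 10 so
# leading zeros are not read as octal).
validate_ip_strict() {
    if [[ ! $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then return 1; fi
    if [[ $1 =~ ^127\. ]]; then return 1; fi
    local octet
    for octet in ${1//./ }; do
        if (( 10#$octet > 255 )); then return 1; fi
    done
    return 0
}
if validate_ip_strict 192.168.1.10; then echo accepted; fi   # prints accepted
if ! validate_ip_strict 300.1.1.1; then echo rejected; fi    # prints rejected
```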

View File

@@ -146,7 +146,7 @@ function get_app_key_secret(){
function get_app_settings(){
local username=$1
local apps=("vault" "desktop" "message" "wise" "search" "appstore" "notification" "dashboard" "settings" "devbox" "profile" "agent" "files")
local apps=("vault" "desktop" "message" "wise" "search" "appstore" "notification" "dashboard" "settings" "studio" "profile" "agent" "files")
for a in ${apps[@]};do
ks=($(get_app_key_secret "$username" "$a"))
echo '
@@ -282,6 +282,33 @@ function get_bfl_status(){
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'tier=bfl' -o jsonpath='{.items[*].status.phase}'"
}
function get_fileserver_status(){
$sh_c "${KUBECTL} get pod -n os-system -l 'app=files' -o jsonpath='{.items[*].status.phase}'"
}
function get_filefe_status(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'app=files' -o jsonpath='{.items[*].status.phase}'"
}
function check_fileserver(){
local status=$(get_fileserver_status)
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rWaiting for file-server to start ${dot}"
sleep 0.5
status=$(get_fileserver_status)
echo -ne "\rWaiting for file-server to start "
done
echo
}
function check_appservice(){
local status=$(get_appservice_status)
local n=0
@@ -300,6 +327,25 @@ function check_appservice(){
echo
}
function check_filesfe(){
local username=$1
local status=$(get_filefe_status ${username})
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rPlease wait ${dot}"
sleep 0.5
status=$(get_filefe_status ${username})
echo -ne "\rPlease wait "
done
echo
}
function check_bfl(){
local username=$1
local status=$(get_bfl_status ${username})
@@ -482,7 +528,7 @@ function upgrade_terminus(){
# patch
ensure_success $sh_c "${KUBECTL} apply -f ${BASE_DIR}/deploy/patch-globalrole-workspace-manager.yaml"
ensure_success $sh_c "$KUBECTL apply -f ${BASE_DIR}/deploy/patch-notification-manager.yaml"
# ensure_success $sh_c "$KUBECTL apply -f ${BASE_DIR}/deploy/patch-notification-manager.yaml"
# clear apps values.yaml
cat /dev/null > ${BASE_DIR}/wizard/config/apps/values.yaml
@@ -510,6 +556,13 @@ function upgrade_terminus(){
for appdir in "${BASE_DIR}/wizard/config/apps"/*/; do
if [ -d "$appdir" ]; then
releasename=$(basename "$appdir")
# ignore wizard
# FIXME: an uninitialized user's wizard should also be upgraded
if [ x"${releasename}" == x"wizard" ]; then
continue
fi
if [ "$user" != "$admin_user" ];then
releasename=${releasename}-${user}
fi
@@ -519,18 +572,6 @@ function upgrade_terminus(){
done
echo 'Waiting for Vault ...'
check_vault ${admin_user}
echo
echo 'Starting BFL ...'
check_bfl ${admin_user}
echo
echo 'Starting Desktop ...'
check_desktop ${admin_user}
echo
# upgrade app service in the last. keep app service online longer
local terminus_is_cloud_version=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.terminus-is-cloud-version}'")
local backup_cluster_bucket=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.backup-cluster-bucket}'")
@@ -544,18 +585,27 @@ function upgrade_terminus(){
--set backup.sync_secret=\"${backup_secret}\""
echo 'Waiting for App-Service ...'
sleep 2 # wait for the controller to reconcile
check_appservice
echo
# upgrade_ksapi ${users[@]}
# echo
echo 'Waiting for Vault ...'
check_vault ${admin_user}
echo
echo 'Starting BFL ...'
check_bfl ${admin_user}
echo
echo 'Starting files ...'
check_fileserver
check_filesfe ${admin_user}
echo
echo 'Starting Desktop ...'
check_desktop ${admin_user}
echo
local gpu=$($sh_c "${KUBECTL} get ds -n gpu-system orionx-server -o jsonpath='{.metadata.name}'")
if [ "x$gpu" != "x" ]; then
echo "upgrading gpu ..."
local GPU_DOMAIN=$($sh_c "${KUBECTL} get ds -n gpu-system orionx-server -o jsonpath='{.metadata.annotations.gpu-server}'")
ensure_success $sh_c "${HELM} upgrade -i gpu ${BASE_DIR}/wizard/config/gpu -n gpu-system --set gpu.server=${GPU_DOMAIN} --reuse-values"
fi
}

View File

@@ -1,2 +1,2 @@
upgrade:
minVersion: 1.12.0-0000000
minVersion: 1.12.0-1

View File

@@ -7,14 +7,20 @@ metadata:
iam.kubesphere.io/uninitialized: "true"
helm.sh/resource-policy: keep
bytetrade.io/owner-role: platform-admin
bytetrade.io/terminus-name: {{.Values.user.terminus_name}}
bytetrade.io/terminus-name: "{{.Values.user.terminus_name}}"
bytetrade.io/launcher-auth-policy: two_factor
bytetrade.io/launcher-access-level: "1"
iam.kubesphere.io/sync-to-lldap: "true"
iam.kubesphere.io/synced-to-lldap: "false"
iam.kubesphere.io/user-provider: lldap
iam.kubesphere.io/globalrole: platform-admin
{{ if .Values.nat_gateway_ip }}
bytetrade.io/nat-gateway-ip: {{ .Values.nat_gateway_ip }}
{{ end }}
spec:
email: {{.Values.user.email}}
password: {{.Values.user.password}}
email: "{{.Values.user.email}}"
initialPassword: "{{ .Values.user.password }}"
groups:
- lldap_admin
status:
state: Active

View File

@@ -0,0 +1,18 @@
apiVersion: iam.kubesphere.io/v1alpha2
kind: Sync
metadata:
name: lldap
spec:
lldap:
name: ldap
url: "http://lldap-service.os-system:17170"
userBlacklist:
- admin
- terminus
groupWhitelist:
- lldap_admin
- lldap_regular
credentialsSecret:
kind: Secret
name: lldap-credentials
namespace: os-system

View File

@@ -33,6 +33,7 @@ rules:
resources:
- users
- configmaps
- secrets
verbs:
- get
@@ -61,6 +62,7 @@ rules:
- pods
- users
- configmaps
- secrets
verbs:
- get
- list

View File

@@ -1,18 +0,0 @@
apiVersion: iam.kubesphere.io/v1alpha2
kind: WorkspaceRoleBinding
metadata:
generation: 1
labels:
iam.kubesphere.io/user-ref: '{{.Values.user.name}}'
kubesphere.io/workspace: system-workspace
name: '{{.Values.user.name}}'
roleRef:
apiGroup: iam.kubesphere.io
kind: WorkspaceRole
name: system-workspace-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: '{{.Values.user.name}}'

View File

@@ -1,5 +1,3 @@
kubesphere:
redis_password: ""
backup:

View File

@@ -1,4 +1,4 @@
olaresd-v0.0.53.tar.gz,pkg/components,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.53-linux-amd64.tar.gz,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.53-linux-arm64.tar.gz,olaresd
olaresd-v1.12.0-rc.10.tar.gz,pkg/components,https://dc3p1870nn3cj.cloudfront.net/olaresd-v1.12.0-rc.10-linux-amd64.tar.gz,https://dc3p1870nn3cj.cloudfront.net/olaresd-v1.12.0-rc.10-linux-arm64.tar.gz,olaresd
socat-1.7.3.2.tar.gz,pkg/components,https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat
conntrack-tools-1.4.1.tar.gz,pkg/components,https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools
minio.RELEASE.2023-05-04T21-44-30Z,pkg/components,https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2023-05-04T21-44-30Z,https://dl.min.io/server/minio/release/linux-arm64/archive/minio.RELEASE.2023-05-04T21-44-30Z,minio
@@ -14,8 +14,11 @@ ubuntu2204_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.
ubuntu2204_cuda-keyring_1.0-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu-22.04_cuda-keyring_1.0-1
ubuntu2004_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb,ubuntu-20.04_cuda-keyring_1.1-1
ubuntu2004_cuda-keyring_1.0-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu-20.04_cuda-keyring_1.0-1
debian12_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,debian-12_cuda-keyring_1.1-1
debian11_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,debian-11_cuda-keyring_1.1-1
gpgkey,pkg/components,https://nvidia.github.io/libnvidia-container/gpgkey,https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
ubuntu_22.04_libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
ubuntu_20.04_libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
libnvidia-gpgkey,pkg/components,https://nvidia.github.io/libnvidia-container/gpgkey,https://nvidia.github.io/libnvidia-container/gpgkey,libnvidia-gpgkey
libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list,https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list,libnvidia-container.list
restic-linux-0.17.3,pkg/components,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_arm64.bz2,restic
restic-darwin-0.17.3,pkg/components,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_darwin_amd64.bz2,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_darwin_arm64.bz2,restic

View File

@@ -1,53 +0,0 @@
[components] format: url,filename
https://github.com/beclab/Installer/releases/download/0.1.13/terminus-cli-v0.1.13_linux_amd64.tar.gz,terminus-cli-v0.1.13_linux_amd64.tar.gz
https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat-1.7.3.2.tar.gz
https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools-1.4.1.tar.gz
https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2023-05-04T21-44-30Z,minio.RELEASE.2023-05-04T21-44-30Z
https://github.com/beclab/minio-operator/releases/download/v0.0.1/minio-operator-v0.0.1-linux-amd64.tar.gz,minio-operator-v0.0.1-linux-amd64.tar.gz
https://download.redis.io/releases/redis-5.0.14.tar.gz,redis-5.0.14.tar.gz
https://github.com/beclab/juicefs-ext/releases/download/v11.1.1/juicefs-v11.1.1-linux-amd64.tar.gz,juicefs-v11.1.1-linux-amd64.tar.gz
https://github.com/beclab/velero/releases/download/v1.11.3/velero-v1.11.3-linux-amd64.tar.gz,velero-v1.11.3-linux-amd64.tar.gz
https://launchpad.net/ubuntu/+source/apparmor/4.0.1-0ubuntu1/+build/28428840/+files/apparmor_4.0.1-0ubuntu1_amd64.deb,apparmor_4.0.1-0ubuntu1_amd64.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_24.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu2404_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_22.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu_22.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu2204_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_20.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu_20.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu2004_cuda-keyring_1.0-1_all.deb
https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
[pkg] format: url,path,filename,special,cpname
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz,cni/v0.9.1,,,
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz,cni/v1.1.1,,,
https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-amd64.tar.gz,containerd/1.6.4,,,
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz,crictl/v1.24.0,,,
https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz,etcd/v3.4.13,,,
https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz,helm/v3.9.0,,helm,helm-v3.9.0
https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s,kube/v1.21.5,,,k3s-v1.21.5
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubeadm,kube/v1.22.10,,kubeadm,kubeadm-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubelet,kube/v1.22.10,,kubelet,kubelet-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubectl,kube/v1.22.10,,kubectl,kubectl-v1.22.10
https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64,runc/v1.1.1,,,runc-v1.1.1
https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64,runc/v1.1.4,,,runc-v1.1.4


@@ -1,53 +0,0 @@
[components] format: url,filename
https://github.com/beclab/Installer/releases/download/0.1.13/terminus-cli-v0.1.13_linux_amd64.tar.gz,terminus-cli-v0.1.13_linux_amd64.tar.gz
https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat-1.7.3.2.tar.gz
https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools-1.4.1.tar.gz
https://dl.min.io/server/minio/release/linux-arm64/archive/minio.RELEASE.2023-05-04T21-44-30Z,minio.RELEASE.2023-05-04T21-44-30Z
https://github.com/beclab/minio-operator/releases/download/v0.0.1/minio-operator-v0.0.1-linux-arm64.tar.gz,minio-operator-v0.0.1-linux-arm64.tar.gz
https://download.redis.io/releases/redis-5.0.14.tar.gz,redis-5.0.14.tar.gz
https://github.com/beclab/juicefs-ext/releases/download/v11.1.1/juicefs-v11.1.1-linux-arm64.tar.gz,juicefs-v11.1.1-linux-arm64.tar.gz
https://github.com/beclab/velero/releases/download/v1.11.3/velero-v1.11.3-linux-arm64.tar.gz,velero-v1.11.3-linux-arm64.tar.gz
https://launchpad.net/ubuntu/+source/apparmor/4.0.1-0ubuntu1/+build/28428841/+files/apparmor_4.0.1-0ubuntu1_arm64.deb,apparmor_4.0.1-0ubuntu1_arm64.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_24.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/arm64/cuda-keyring_1.1-1_all.deb,ubuntu2404_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_22.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu_22.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu2204_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_20.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu_20.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu2004_cuda-keyring_1.0-1_all.deb
https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
[pkg] format: url,path,filename,special,cpname
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz,cni/v0.9.1,,,
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz,cni/v1.1.1,,,
https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-arm64.tar.gz,containerd/1.6.4,,,
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-arm64.tar.gz,crictl/v1.24.0,,,
https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-arm64.tar.gz,etcd/v3.4.13,,,
https://get.helm.sh/helm-v3.9.0-linux-arm64.tar.gz,helm/v3.9.0,,helm,helm-v3.9.0
https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s-arm64,kube/v1.21.5,,,k3s-v1.21.5
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubeadm,kube/v1.22.10,,kubeadm,kubeadm-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubelet,kube/v1.22.10,,kubelet,kubelet-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubectl,kube/v1.22.10,,kubectl,kubectl-v1.22.10
https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64,runc/v1.1.1,,,runc-v1.1.1
https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64,runc/v1.1.4,,,runc-v1.1.4


@@ -1,51 +1,24 @@
beclab/ks-apiserver:v3.3.0-ext-3
beclab/kube-state-metrics:v2.3.0-ext
beclab/notification-manager-ext:v0.1.1-ext
beclab/notification-manager-operator-ext:v0.1.0-ext
beclab/notification-tenant-sidecar:v0.1.0
calico/cni:v3.23.2
calico/cni:v3.27.3
calico/kube-controllers:v3.23.2
calico/kube-controllers:v3.27.3
calico/node:v3.23.2
calico/node:v3.27.3
calico/pod2daemon-flexvol:v3.23.2
beclab/ks-apiserver:0.0.11
beclab/ks-controller-manager:0.0.11
beclab/kube-state-metrics:v2.3.0-ext.1
calico/cni:v3.29.2
calico/kube-controllers:v3.29.2
calico/node:v3.29.2
beclab/citus:12.2
csiplugin/snapshot-controller:v4.0.0
beclab/ks-installer-ext:v0.1.9-ext
kubesphere/k8s-dns-node-cache:1.15.12
kubesphere/ks-console:v3.3.0
kubesphere/ks-controller-manager:v3.3.0
kubesphere/kube-apiserver:v1.22.10
kubesphere/kube-apiserver:v1.21.4
kubesphere/kube-controller-manager:v1.22.10
kubesphere/kube-controller-manager:v1.21.4
kubesphere/kubectl:v1.22.0
kubesphere/kube-proxy:v1.22.10
kubesphere/kube-proxy:v1.21.4
kubesphere/kube-rbac-proxy:v0.12.0
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/kube-scheduler:v1.22.10
kubesphere/kube-scheduler:v1.21.4
kubesphere/pause:3.5
kubesphere/pause:3.4.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/kube-scheduler:v1.22.10
k8s.gcr.io/kube-proxy:v1.22.10
k8s.gcr.io/kube-controller-manager:v1.22.10
k8s.gcr.io/kube-apiserver:v1.22.10
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
registry.k8s.io/pause:3.5
bitnami/kube-rbac-proxy:0.19.0
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
mirrorgooglecontainers/defaultbackend-amd64:1.4
openebs/linux-utils:3.3.0
openebs/provisioner-localpv:3.3.0
beclab/percona-server-mongodb-operator:1.15.2
prom/alertmanager:v0.23.0
prom/node-exporter:v1.3.1
beclab/node-exporter:0.0.1
prom/prometheus:v2.34.0
quay.io/argoproj/argocli:v3.5.0
quay.io/argoproj/argoexec:v3.5.0
@@ -53,19 +26,22 @@ quay.io/argoproj/workflow-controller:v3.5.0
redis:5.0.14-alpine
beclab/velero:v1.11.3
beclab/velero-plugin-for-terminus:v1.0.2
beclab/l4-bfl-proxy:v0.2.7
beclab/l4-bfl-proxy:v0.3.0
gcr.io/k8s-minikube/storage-provisioner:v5
owncloudci/wait-for:latest
beclab/recommend-argotask:v0.0.12
nvcr.io/nvidia/k8s-device-plugin:v0.16.1
beclab/nvshare:libnvshare-v0.0.1
bytetrade/nvshare:nvshare-device-plugin
bytetrade/nvshare:nvshare-scheduler
beclab/nats-server-config-reloader:v1
beclab/cloudflared:v0.1.0
rancher/mirrored-library-busybox:1.34.1
rancher/mirrored-library-traefik:2.6.2
rancher/mirrored-metrics-server:v0.5.2
rancher/mirrored-pause:3.6
beclab/reverse-proxy:v0.1.4
beclab/upgrade-job:0.1.5
beclab/reverse-proxy:v0.1.8
beclab/upgrade-job:0.1.7
bytetrade/envoy:v1.25.11.1
liangjw/kube-webhook-certgen:v1.1.1
beclab/hami:v2.5.2
alpine:3.14
mirrorgooglecontainers/defaultbackend-amd64:1.4
projecthami/hami-webui-fe-oss:v1.0.5
projecthami/hami-webui-be-oss:v1.0.5
nvidia/dcgm-exporter:4.1.1-4.0.4-ubuntu22.04
ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.20.0
bytetrade/autoinstrumentation-apache-httpd:1.0.4-fix1
ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.40.0


@@ -1,7 +1,8 @@
kubesphere/pause:3.5
calico/cni:v3.23.2
calico/node:v3.23.2
kubesphere/kube-rbac-proxy:v0.11.0
registry.k8s.io/pause:3.10
calico/cni:v3.29.2
calico/kube-controllers:v3.29.2
calico/node:v3.29.2
bitnami/kube-rbac-proxy:0.19.0
prom/node-exporter:v1.3.1
beclab/image-service:0.2.12
beclab/osnode-init:v0.0.10


@@ -1,12 +1,10 @@
cni-plugins-v0.9.1.tgz,pkg/cni/v0.9.1,https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz,https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz,cni-plugins-k3s
cni-plugins-v1.1.1.tgz,pkg/cni/v1.1.1,https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz,https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz,cni-plugins-k8s
containerd-1.6.4.tar.gz,pkg/containerd/1.6.4,https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-amd64.tar.gz,https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-arm64.tar.gz,containerd
crictl-v1.24.0-linux-amd64.tar.gz,pkg/crictl/v1.24.0,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-arm64.tar.gz,crictl
etcd-v3.4.13.tar.gz,pkg/etcd/v3.4.13,https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz,https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-arm64.tar.gz,etcd
helm-v3.9.0.tar.gz,pkg/helm/v3.9.0,https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz,https://get.helm.sh/helm-v3.9.0-linux-arm64.tar.gz,helm
k3s,pkg/kube/v1.21.5,https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s,https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s-arm64,k3s
kubeadm,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubeadm,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubeadm,kubeadm
kubelet,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubelet,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubelet,kubelet
kubectl,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubectl,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubectl,kubectl
runc,pkg/runc/v1.1.1,https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64,runc-k3s
runc,pkg/runc/v1.1.4,https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64,runc-k8s
cni-plugins-v1.6.2.tgz,pkg/cni/v1.6.2,https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz,https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm64-v1.6.2.tgz,cni-plugins
containerd-1.6.36.tar.gz,pkg/containerd/1.6.36,https://github.com/containerd/containerd/releases/download/v1.6.36/containerd-1.6.36-linux-amd64.tar.gz,https://github.com/containerd/containerd/releases/download/v1.6.36/containerd-1.6.36-linux-arm64.tar.gz,containerd
crictl-v1.32.0.tar.gz,pkg/crictl/v1.32.0,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-arm64.tar.gz,crictl
etcd-v3.5.18.tar.gz,pkg/etcd/v3.5.18,https://github.com/coreos/etcd/releases/download/v3.5.18/etcd-v3.5.18-linux-amd64.tar.gz,https://github.com/coreos/etcd/releases/download/v3.5.18/etcd-v3.5.18-linux-arm64.tar.gz,etcd
helm-v3.17.1.tar.gz,pkg/helm/v3.17.1,https://get.helm.sh/helm-v3.17.1-linux-amd64.tar.gz,https://get.helm.sh/helm-v3.17.1-linux-arm64.tar.gz,helm
k3s-v1.32.2,pkg/kube/v1.32.2,https://github.com/k3s-io/k3s/releases/download/v1.32.2+k3s1/k3s,https://github.com/k3s-io/k3s/releases/download/v1.32.2+k3s1/k3s-arm64,k3s
kubeadm-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubeadm,kubeadm
kubelet-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubelet,kubelet
kubectl-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl,kubectl
runc-v1.2.5,pkg/runc/v1.2.5,https://github.com/opencontainers/runc/releases/download/v1.2.5/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.2.5/runc.arm64,runc


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

Some files were not shown because too many files have changed in this diff.