Compare commits


193 Commits

Author SHA1 Message Date
liuyu
687cb451af fix: wrong annotation configurations 2025-03-13 16:50:37 +08:00
liuyu
04df191825 fix: wrong annotation configurations 2025-03-13 16:48:57 +08:00
liuyu
ac34bf3857 olares: fix the opentelemetry annotations configuration bug 2025-03-13 15:53:40 +08:00
huaiyuan
e0a670628c desktop: request data when socket err or network offline (#1070) 2025-03-12 23:27:23 +08:00
aby913
7ced9702df feat(installer): support data backup, restore in olares-cli (#1069) 2025-03-12 23:26:58 +08:00
eball
09cb6075ad olares: use the pod localhost address as the infisical server address for the infisical sidecar (#1068)
Co-authored-by: liuyu <>
2025-03-12 23:26:19 +08:00
hysyeah
d8ba35adbe tapr,bfl:add tapr-image-role secrets permission;fix create user cpu check (#1066) 2025-03-12 21:24:01 +08:00
eball
da469f4f27 tapr: add missing fields of db table organizations in Infisical sidecar (#1064)
Co-authored-by: liuyu <>
2025-03-12 21:04:15 +08:00
hysyeah
d7265418cd fix: change ks image tag (#1061) 2025-03-12 20:14:06 +08:00
salt
0f12d4e5df fix: optimize google,dropbox direct upload (#1060)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-12 20:12:32 +08:00
wiy
f3a76a229f feat(files): update files support google drive & dropbox (#1057) 2025-03-12 15:40:49 +08:00
dkeven
6bc4ec410a fix: add the missing kubernetes image (#1056) 2025-03-12 15:38:38 +08:00
dkeven
cad586985f feat(installer): support swap and zram configurations (#1055) 2025-03-12 14:45:51 +08:00
berg
6f1b1c667a market: reconnect socket and reinitialize data on app return (#1053)
feat: market release v0.3.6 version
2025-03-12 00:03:19 +08:00
lovehunter9
d334a537d1 style: files-server project structure reconstruction (#1051) 2025-03-12 00:02:22 +08:00
hysyeah
744edb7969 fix: add node shell image to pre download (#1050) 2025-03-12 00:01:08 +08:00
eball
3e506527a2 tapr: move infisical secret service to os-system as a singleton instance (#1047)
* tapr: move infisical secret service to os-system as a singleton instance

* fix: middleware configuration

* fix: cluster role bug

---------

Co-authored-by: liuyu <>
2025-03-11 00:28:56 +08:00
hysyeah
58a9264fab app-service: change hostpath with type DirectoryOrCreate owner to 1000 by injecting an init container (#1046) 2025-03-10 22:19:55 +08:00
yyh
a36ecdddc9 control-hub: fix terminal route path conflict (#1045)
fix(control-hub): fix terminal route path conflict
2025-03-10 21:06:21 +08:00
eball
9b5aa0e550 olares: add opentelemetry to the cluster to trace the services of the cluster (#1042)
* feat: add opentelemetry operator to cluster

* feat: add instrumentation injecting

* fix: add webhook test pod

* fix: update helm hook to install webhook priority

* fix: update priority

* fix: post install otel webhook

* fix: collector bug & post install to wait operator running

* fix: alpine 3.3 has no arm64 version

---------

Co-authored-by: liuyu <>
2025-03-09 21:29:15 +08:00
hysyeah
4567cc4cfe olares: fix special leading char cause helm render error (#1040) 2025-03-07 00:34:37 +08:00
berg
3b49853bd4 wise, knowledge: add reading progress function and fix some bugs (#1039)
feat: update wise and knowledge version
2025-03-07 00:34:11 +08:00
huaiyuan
ad37446fc1 desktop: launch display different icons on different devices (#1037) 2025-03-06 15:49:54 +08:00
dkeven
01644ec8b3 feat: use HAMi with nvshare as GPU plugin (#1033) 2025-03-06 15:47:53 +08:00
wiy
492e56becb files: update files new version to 1.3.39 (#1029)
* fix: seafile remove recv file log for uploading more stable

* fix: upload retry error & sync upload refresh files

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-03-05 23:57:40 +08:00
yyh
0e9d57051f feat(control-hub & ks): add node terminal (#1028)
* feat(control-hub): add node terminal

* feat: handle node default shell to bash

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-03-05 23:57:18 +08:00
huaiyuan
a90ab98631 fix: update @bytetrade/core to 0.2.53 (#1026) 2025-03-05 23:56:08 +08:00
eball
d1232f37c3 fix: increase ingress client body buffer size (#1023) 2025-03-05 23:54:41 +08:00
dkeven
9e9267b4b0 fix(bfl): fetch current user object before every configure operation (#1021) 2025-03-05 23:54:02 +08:00
berg
55bcb45ab2 wise, file: update files & wise new version to 1.3.38 (#1019)
* fix: files changed to feed drive_server 0.0.50 and cache using newest version, uploader offset judging changed for SMB 499 and improve uploading speed

* feat: update files & wise new version

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: qq815776412 <815776412@qq.com>
2025-03-04 23:59:54 +08:00
dkeven
710491d8ed feat: upgrade k8s to 1.32 (#1014) 2025-03-04 20:48:09 +08:00
huaiyuan
323dc52e59 login&desktop: open a new tab when on mobile and tablet devices (#1015)
login&desktop: open the app in a new tab when on mobile and tablet devices
2025-03-04 00:05:53 +08:00
dkeven
c02910400e feat(bfl): add watcher to apply reverse proxy (#1013) 2025-03-04 00:05:17 +08:00
eball
0e25eb1d8b olaresd: remove smb mounting blocksize option to use the default value (#1011) 2025-03-04 00:04:29 +08:00
hysyeah
ee1e2abed0 app-service: fix envoy outbound port (#1010) 2025-03-04 00:04:06 +08:00
aby913
ea24c1a33c ci: build restic (#1001) 2025-03-03 21:23:02 +08:00
simon
c993d936be knowledge&download: update knowledge to v0.1.64, download-spider to v0.0.19 (#1007)
knowledge v0.1.64
2025-03-03 12:07:52 +08:00
salt
7ba5b5628a feat: add id-route for file info, fix file size limit when direct upload (#1005)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-03-03 11:07:13 +08:00
huaiyuan
94181ab9db login&desktop: update desktop dock logic and optimize mobile device (#1002)
login&desktop: update desktop dock logic and optimize mobile device
2025-02-28 23:55:11 +08:00
hysyeah
9f2f390b5a app-service: custom allowed outbound port;tcp udp port (#997)
* app-service: custom allowed outbound port;tcp udp port

* fix: add idle timeout to original_dst cluster

---------

Co-authored-by: liuyu <>
2025-02-27 23:59:46 +08:00
Calvin W.
c514ecec20 docs: fix bad link in readme (#996) 2025-02-27 00:07:51 +08:00
hysyeah
1fcbd0b790 app-service: fix app installation can not be canceled after reboot (#993) 2025-02-26 00:33:31 +08:00
salt
5bb3143f57 feat: cloud drive async upload rename (#992)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-26 00:33:05 +08:00
eball
b368735e27 bfl-ingress: increase keepalive requests of ingress (#990) 2025-02-26 00:31:57 +08:00
huaiyuan
e7792c272e files&files server: add support for google drive and dropbox (#989)
* feat: files add support for google drive and dropbox

* fix(files): update google drive and dropbox

* limit version for appdata-backend

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-25 13:13:50 +08:00
huaiyuan
f622bec74f desktop: update highlight txt in search (#988) 2025-02-24 23:33:54 +08:00
hysyeah
cc3d8faabf tapr: fix create stream return nil value (#985) 2025-02-24 23:32:34 +08:00
salt
2ec8abe45c fix: fix async upload from terminus to dropbox file size error (#984)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-24 23:32:09 +08:00
salt
97e67e4e28 feat: optimization search3 (#981)
* feat: optimization search3

* feat: desktop-server change for search3 merge result

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-24 18:50:33 +08:00
simon
ce5120008d knowledge: update knowledge to v0.1.63 (#980)
knowledge v0.1.63
2025-02-21 23:56:20 +08:00
yyh
80003178bf fix(desktop): disable PWA in safari on the desktop (#979) 2025-02-21 23:55:53 +08:00
hysyeah
946598e731 tapr, system-server: fix auth token validate (#977) 2025-02-21 23:54:52 +08:00
berg
e311ab4f72 market: allow paused apps to update (#975)
feat: update market to v0.3.5
2025-02-21 23:53:46 +08:00
simon
678645a243 knowledge&download: update knowledge to v0.1.62, yt-dlp to v0.0.20 (#973)
knowledge update
2025-02-20 23:28:07 +08:00
hysyeah
61344115f2 app-service,kubesphere: get best cdn server in upgrade job; change kubectl image tag (#972)
* app-service,kubesphere: get best cdn server in upgrade job; change kubectl image tag

* Update images

* Update appservice_deploy.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-02-20 23:27:35 +08:00
eball
c227e9ba21 olaresd: optimize smb mount options & add api for oic (#969) 2025-02-20 17:11:52 +08:00
simon
e98c276bf0 download&backend server: update download-spider to v0.0.17, backend to v0.0.26 (#967)
add twitter, zhihu extract
2025-02-20 00:39:49 +08:00
huaiyuan
4d4f8999d0 larepass&files&files server: update LarePass version to v1.3.31 (#965)
* fix: sync recursive pasting with escape

* fix(files): block slashes when creating/renaming and update notify msg

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-20 00:39:18 +08:00
hysyeah
e1ad84bca5 kubesphere, bfl, authelia, app-service, system-server, installer: ks remove unused code;support lldap auth (#959)
* feat: ks remove unused code;support lldap auth

* fix: update monitoring server

* fix: update cli version
2025-02-20 00:38:36 +08:00
huaiyuan
9587345155 larepass&files&files server: update LarePass version to v1.3.30 (#964)
* fix: pasting to sync with special characters

* fix(files): prompt message when a backslash appears in sync

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-18 23:52:10 +08:00
eball
14400a559e files: make the files server running as root (#960) 2025-02-18 23:50:27 +08:00
huaiyuan
65211ba044 larePass&files&files server: update LarePass version to v1.3.29 (#957)
* fix: deal with special characters for drive/cache/sync, fix upload progress loss when the uploader restarts

* fix(files): fix bug of special character error in file name

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-02-18 00:18:21 +08:00
huaiyuan
c4516d19c7 login: display login content on Safari browser (#955)
fix: display login content on Safari browser
2025-02-17 23:51:35 +08:00
yyh
4064ccf393 fix(desktop): fix resource cache in safari browser and some ui bugs (#954) 2025-02-17 23:51:01 +08:00
berg
74377bd655 settings: hide user email entry (#952)
feat: update settings v0.2.11
2025-02-17 22:19:41 +08:00
eball
ac33371b57 bfl: increase l4 proxy nginx worker process number to half of cpu cores (#949)
bfl: increase nginx worker process to half of cpu cores
2025-02-17 22:04:26 +08:00
salt
4617d8828a feat: fix known dropbox, googledrive problems (#948)
feat: fix known dropbox, googledrive problems

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-02-17 10:55:37 +08:00
hysyeah
c117ea6c8f app-service: change user space network policy for ipblock (#946)
fix: change user space network policy for ipblock
2025-02-13 23:42:41 +08:00
hysyeah
c290145ea8 app-service: continue to resume op after restart; envoy inbound tcp proxy (#943)
* app-service: continue to resume op after restart; envoy inbound tcp proxy

* ci: fix upload script bug

---------

Co-authored-by: liuyu <>
2025-02-12 22:51:28 +08:00
dkeven
e56978b164 fix(installer): restart coredns when change ip, raise cri timeout (#941) 2025-02-12 01:12:09 +08:00
eball
afc83d5c85 tapr: add node affinity to citus and kvrocks (#939)
Co-authored-by: liuyu <>
2025-02-11 13:44:33 +08:00
eball
9f324692bd olares: upload the original file with md5 as a backup (#938)
* olares: upload original file with md5 as a backup

* olares: upload original file with md5 as a backup

---------

Co-authored-by: liuyu <>
2025-02-10 20:28:41 +08:00
liuyu
bb471ba463 suspend daily build 2025-01-31 09:59:41 +08:00
eball
b08174353a olares: remove some debug code (#935)
fix: remove some debug codes

Co-authored-by: liuyu <>
2025-01-24 13:41:05 +08:00
eball
60bedc6c46 app-service: remove app cache path on the hosts directly (#936)
* app-service: remove app cache path on the hosts directly

* Update appservice_deploy.yaml
2025-01-24 11:05:07 +08:00
huaiyuan
98984ead44 files: delete notify id in notifyHide (#932)
fix: delete notify id in notifyHide
2025-01-23 23:01:13 +08:00
eball
a578148d5e olaresd: allow mounting an external device to ai path (#929)
olaresd: allow mounting an external device to ai path
2025-01-23 20:23:34 +08:00
eball
35c2072d9c app-service: inject nvshare environment duplicately (#927) 2025-01-23 20:23:01 +08:00
huaiyuan
9b57981490 files&files server: update LarePass version to v1.3.25 (#925)
* uploader v1.0.9 to make the final stage of uploading big files invisible; increase files nginx workers to auto and increase timeouts of files nginx, envoy, and seafile nginx

* files: notify each operation when pasting

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-23 20:21:52 +08:00
aby913
45d32ef568 fix(installer): prompt for the installation location and setup host ip as nat gateway ip for oic (#923) 2025-01-23 20:11:47 +08:00
huaiyuan
01d259870a files&files server: update LarePass version to v1.3.24 (#919)
* fix: files nginx increase workers and timeout, and make pasting temp files invisible

* fix: fix create new folder in sync and update nginx timeout

* fix: increase the ingress read timeout

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: liuyu <>
2025-01-22 21:33:32 +08:00
0x7fffff92
e94c3acf25 fix: let tailscale follow headscale restart (#917)
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-22 16:58:39 +08:00
aby913
d95c577789 fix(installer): wsl hangs on update (#916) 2025-01-22 15:33:44 +08:00
simon
f72e4b903c knowledge: update version to v0.1.61 (#908)
knowledge
2025-01-22 14:03:16 +08:00
aby913
2c57b6f35a ci: build wsl-msi script fix (#907)
ci: build script fix
2025-01-21 23:31:24 +08:00
yyh
00c44e2797 fix(control-hub): fix pod status sync after delete replicas (#912) 2025-01-21 22:22:52 +08:00
huaiyuan
9fa30c9034 files&files server: disable nats and expand upload size limit to 100G (#909)
* fix: disable nats and expand upload size limit to 100G

* fix: files disable socket and expand upload size limit to 100G

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-21 22:22:39 +08:00
aby913
764547abda ci: add build-wsl-package workflow (#901) 2025-01-21 20:55:07 +08:00
huaiyuan
f08b03863d files&files server: update larepass version to v1.3.20 (#905)
* fix: files immediately send events for remove/rename and folder create

* fix: fix files uploadModal count err and filter md5

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-01-21 19:48:37 +08:00
eball
1a2f45760a olaresd: make usb device mounting compatible with ata bridge (#903) 2025-01-21 19:06:23 +08:00
aby913
ab596896c7 ci: upload wsl2 installation package (#895)
ci: upload wsl-install-msi
2025-01-21 01:33:46 +08:00
simon
4e13cc2f9e download: update yt-dlp download version to v0.0.19 (#900)
yt-dlp
2025-01-21 01:33:15 +08:00
huaiyuan
d17514e94a files&settings&market&files server: update version larepass to v1.3.19 (#898)
fix: files-server memory explosion bug by deleting md5 and buffering io.Copy
2025-01-20 23:42:24 +08:00
eball
dcaa0e7755 installer: install cifs-utils for mounting smb path (#893)
fix: install cifs-utils for mounting smb path

Co-authored-by: liuyu <>
2025-01-20 17:08:51 +08:00
hysyeah
1c9dfc702f app-service: support network visit from windows app (#891) 2025-01-20 00:38:15 +08:00
huaiyuan
1977c12c16 files, appdata-gateway,uploader: smb support, md5 function, cache preview and fix a pvc problem (#889)
* files, appdata-gateway and uploader: smb support, md5 function, cache preview and fix a pvc problem

* files, appdata-gateway and uploader: smb support, md5 function, cache preview and fix a pvc problem

* feat: mount smb share file & connect wifi via ble

* Merge branch 'smb_md5_history' of github.com:beclab/olares into smb_md5_history

# Conflicts:
#	apps/files/config/cluster/deploy/files_deploy.yaml

* files: external add smb server and files can view MD5

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: hysyeah <hysyeah@gmail.com>
Co-authored-by: liuyu <>
2025-01-18 00:54:41 +08:00
dkeven
4c69c7df7f fix(installer): modified some commands to be compatible with running in the container (#888) 2025-01-17 22:42:22 +08:00
hysyeah
bd591d106f app-service: inject nvshare-debug env (#886) 2025-01-17 21:35:26 +08:00
dkeven
d5ca9826e8 fix(installer): issues in wsl downloading/ssh sudo/containerd install (#884) 2025-01-17 21:30:53 +08:00
Calvin W.
eb1f35f934 docs: update the latest arch diagram (#883) 2025-01-17 19:10:53 +08:00
Calvin W
3007354c76 update the latest version 2025-01-17 13:39:07 +08:00
Calvin W
62a3152574 docs: update the latest arch diagram 2025-01-16 19:21:50 +08:00
eball
f785c89999 olares,bfl: update critical pods priority class (#879)
olares: update critical pods priority class

Co-authored-by: liuyu <>
2025-01-16 16:54:45 +08:00
berg
b502dfc1ef settings, dashboard: restore settings app entrance status notification and dashboard websocket (#876)
* fix: fix dashboard and settings websocket and update application entrance status

* fix: move dashboard ws nginx proxy
2025-01-16 00:16:01 +08:00
eball
baae5a5632 bfl: fix headscale acl api path parameters (#874) 2025-01-16 00:15:31 +08:00
dkeven
5c9a6dfa87 fix(installer): dont wipe juicefs when uninstalling worker (#873) 2025-01-15 21:34:30 +08:00
Calvin W.
86fcaf16c0 docs: remove comparison table and update arch diagram in readme (#871)
* docs: remove comparison table and update arch diagram

* Apply suggestions from code review

Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>

---------

Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>
2025-01-15 21:33:32 +08:00
berg
3225626ad9 bfl, settings, app-service: add ports and tailscale acl (#870)
* app-service,bfl: app ports acl api

* feat: update settings frontend and settings server

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-01-15 00:18:18 +08:00
dkeven
7ce7f0febe feat: add node to a cluster (#868) 2025-01-14 21:52:28 +08:00
dkeven
0eebaf7ddf feat(installer): add env var to explicitly specify public access (#866) 2025-01-14 21:22:02 +08:00
0x7fffff92
5947cfe42f fix(headscale): use postgres instead of sqlite for headscale rollingupdate (#865)
fix: use postgres instead of sqlite for headscale rollingupdate

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-14 21:21:41 +08:00
berg
e0050837ad wise: fix some bugs and update the version to be consistent with olares 1.11 (#858)
feat: update wise version
2025-01-13 22:22:58 +08:00
aby913
61eeb2094f fix(installer): windows user home path (#862) 2025-01-13 22:08:00 +08:00
dkeven
f9546d61ac fix(installer): fix multiple network-related bugs (#859) 2025-01-13 19:47:36 +08:00
dkeven
b044d6ece1 feat(installer): check systemd-resolved and config resolv.conf (#856) 2025-01-10 22:08:49 +08:00
hysyeah
ec416d0206 app-service: delete cache dir when cancel installation;set nvshare env (#855) 2025-01-10 21:18:51 +08:00
dkeven
1c114a4d80 feat(installer): check the validity of resolv.conf before installation (#851) 2025-01-10 16:12:38 +08:00
berg
fddd30916f market, bfl, app-service: added dependency checking mechanism and fixed some bugs (#849)
* feat: added dependency checking for the application and fixed some bugs

* app-service: add mandatory dep check; dequeue when app is initialized

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-01-09 23:52:49 +08:00
dkeven
5c8af06143 feat(installer): support enabling GPU on Debian & Ubuntu24 (#846) 2025-01-09 23:48:35 +08:00
dkeven
f8885ea3db fix(installer): run cuda lib script for WSL, disable uninstall cmd for WSL (#844) 2025-01-08 19:43:50 +08:00
eball
0cdcfcfb7f auth: redirect to login portal following the request of local domain (#841)
fix: redirect to login portal following the request of local domain
2025-01-08 14:45:45 +08:00
dkeven
ae78500731 fix(installer): use a global supported cuda version list (#842) 2025-01-08 14:44:00 +08:00
huaiyuan
71c24d7592 feat(Files&Vault&Wise&Files server): update LarePass new version to v1.3.14 (#836)
* feat: files server send message to frontend with nats when directory changed

* feat: update vault nats

* fix: files-frontend to vault

* feat: files frontend update data when the socket sended and add FilesDialog component

* Update files_deploy.yaml

* fix: vault server yaml

* fix: middleware operator nats mr list

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: qq815776412 <815776412@qq.com>
Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: liuyu <>
Co-authored-by: hys <hysyeah@gmail.com>
2025-01-08 14:42:01 +08:00
dkeven
c53444b7c7 fix(installer): unify cuda support check in different tasks (#840) 2025-01-08 11:27:05 +08:00
dkeven
cd8498f3a6 fix(installer): multiple GPU-related bugs (#833) 2025-01-07 22:17:18 +08:00
hysyeah
a0e3cd7d8f image-service: fix remove custom mirror connection check;only proxy docker.io (#834) 2025-01-07 22:05:07 +08:00
aby913
a89ad94cfa fix(installer): check if PowerShell is running as an administrator (#832)
no message
2025-01-07 20:38:28 +08:00
dkeven
b20031bd17 fix(installer): invalid gpu node label value, run task without runner (#831) 2025-01-07 15:07:46 +08:00
dkeven
2c91b10136 fix(installer): properly check cuda driver & gpu plugin (#830) 2025-01-07 12:11:00 +08:00
dkeven
96a7579322 feat(installer): add gpu commands (#826)
* feat: add node selector

* feat(installer): install gpu driver & plugin by default

* fix: label bug

* fix: update installer

---------

Co-authored-by: liuyu <>
2025-01-06 23:06:11 +08:00
simon
aae7a4c21d wise: fix nginx configuration and database migration bugs (#827)
knowledge
2025-01-06 21:26:06 +08:00
aby913
2f76f98b69 fix(installer): install olares-cli.exe to the Windows global path (#823)
fix(installer): install olares-cli.exe to the Windows application directory for global access
2025-01-06 20:13:40 +08:00
yyh
13128d2a16 fix(controlhub&dashboard): fix dashboard analytics multiple entrances and controlhub ui (#825)
fix: fix dashboard analytics multiple entrances and controlhub ui
2025-01-06 19:07:56 +08:00
simon
f9a281e789 knowledge and download: add filter and fix download bugs (#822)
knowledge v0.1.59
2025-01-04 19:53:53 +08:00
berg
78fda8a830 wise: updates upload and download functionality (#821)
feat: wise updates upload and download functionality
2025-01-04 02:26:27 +08:00
hysyeah
f7a254b82f app-service: fix api apps missing initializing state (#820) 2025-01-04 02:26:04 +08:00
wiy
cefcdd2690 revert(files-frontend): back files-frontend to files_fe_deploy (#819)
* feat: move files-frontend to system-frontend

* feat: set files-service to files1-service

* fix: files service and secret

* fix: update files-service to files-fe-service

* fix: files-fe-frontend build error

* fix: use tab error

* fix: files.conf error

* fix: files.conf server error

* revert: files_frontend and system-frontend

---------

Co-authored-by: liuyu <>
2025-01-04 02:25:41 +08:00
hysyeah
ad08b09463 app-service: add tailscale acls support for OlaresManifest.yaml (#817) 2025-01-02 23:46:33 +08:00
aby913
b00c93b85c feat(installer): add firewall settings for Windows (#816) 2025-01-02 23:45:40 +08:00
0x7fffff92
08cafd2fb5 feat(headscale): move acl.json to configmap (#815)
* feat: add acl to allow ssh for tailscale

* feat: acl using configmap

* chore: using RollingUpdate for headscale

* chore: add default acl.json configmap

---------

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-01-02 23:45:02 +08:00
wiy
703065750d feat(system-frontend): move files-frontend to system-frontend (#814)
* feat: move files-frontend to system-frontend

* feat: set files-service to files1-service

* fix: files service and secret

* fix: update files-service to files-fe-service

* fix: files-fe-frontend build error

* fix: use tab error

* fix: files.conf error

* fix: files.conf server error

---------

Co-authored-by: liuyu <>
2025-01-02 23:44:11 +08:00
salt
e71ec8d570 feat: recommend optimization (#813)
* feat: recommend optimization

* feat: recommend optimization, frontend part show debug info

---------

Co-authored-by: Ubuntu <ubuntu@ip-172-31-39-127.cluster.local>
2024-12-31 21:13:39 +08:00
fnalways
6932ab655a docs: update wording to clear confusion (#809) 2024-12-27 18:17:19 +08:00
Calvin W
351b0ee938 docs: update wording to clear confusion 2024-12-27 17:50:55 +08:00
hysyeah
f047051140 app-service: fix app suspend in os-system;image download bug (#807) 2024-12-27 15:43:50 +08:00
Ikko Eltociear Ashimine
d9b7b7549c docs: add Japanese README (#806)
I created a Japanese translation of the README.
2024-12-27 14:43:18 +08:00
dkeven
3afd510477 feat(installer): add a separate command for all prechecks (#802)
feat: add a separate command for all prechecks
2024-12-26 20:20:45 +08:00
eball
721b3dad44 olaresd: ignore unknown graphics card (#801) 2024-12-26 20:13:20 +08:00
yyh
6b8a26231a fix(system-frontend): fix app bugs and update some ui (#798) 2024-12-26 11:45:32 +08:00
berg
e1a15039f2 wise, vault, file: fix some ui bugs (#796)
fix: fix some wise, vault, file ui bugs
2024-12-25 00:10:36 +08:00
dkeven
8dcebeeea2 fix(installer): tag k8s images in minikube to avoid destructive reload (#795) 2024-12-24 15:12:44 +08:00
hysyeah
babd97802e app-service: fix patch deploy/sts causing pod restart (#794) 2024-12-24 00:01:28 +08:00
berg
49e7006373 wise, vault, file: Optimize the loading speed of the reading detail page. (#791)
feat: Optimize the loading speed of the reading detail page.
2024-12-23 23:59:44 +08:00
dkeven
6e9143bbb9 fix(installer): reset config path of cri plugin in minikube (#790) 2024-12-23 21:10:52 +08:00
dkeven
5f34fa5049 feat(installer): separate phase & command for storage installation (#789) 2024-12-23 16:48:10 +08:00
eball
2028656a6a olares: typo in nvshare scheduler yaml (#788) 2024-12-23 14:35:42 +08:00
eball
bca084d8f5 olares: fix nvshare files be conflicting with dir (#787)
Co-authored-by: liuyu <>
2024-12-23 11:29:15 +08:00
aby913
dd201f0b89 tapr, knowledge, system-frontend: fix: adjust knowledge websocket proxy (#785) 2024-12-21 00:02:39 +08:00
aby913
b45c88ee82 installer: feat get cuda version (#784) 2024-12-21 00:01:57 +08:00
huaiyuan
7b40e65315 files/vault/wise: upgrade larepass version to v1.3.6 (#782)
fix: upgrade larepass version to v1.3.6
2024-12-20 22:13:11 +08:00
huaiyuan
83ca9667f9 style(login&desktop): optimize Login and Desktop ui (#780) 2024-12-20 22:02:19 +08:00
yyh
0f8c074033 style(dashboard&controlhub): optimize dashboard and controlhub styling (#778) 2024-12-20 21:35:07 +08:00
dkeven
51427d6b73 feat(installer): support setting registry mirrors for minikube (#777) 2024-12-20 20:17:13 +08:00
hysyeah
0fe1c04031 app-service: set gpu values (#774) 2024-12-20 20:15:40 +08:00
hysyeah
3e36703327 olares: add init container for nats to generate nats.conf (#773) 2024-12-20 20:14:10 +08:00
eball
f89fb7fd28 olaresd: get default gateway interface ip (#772) 2024-12-19 23:46:24 +08:00
Calvin W.
929ef45cdc docs: fix video link in readme (#770) 2024-12-19 23:45:49 +08:00
berg
dc35515102 setting, profile: replace common component and fix ui details (#768)
fix: update q-toggle component and ui details
2024-12-19 21:26:15 +08:00
aby913
ec2eb83a11 installer: feat support pve lxc (#767)
installer: support pve lxc
2024-12-19 15:01:14 +08:00
Sai
e9edf5e45f market: fix app info inconsistency (#766)
fix app info inconsistency
2024-12-19 11:29:04 +08:00
eball
3063232632 olaresd: watching the ip-changing log modified (#764) 2024-12-18 21:22:54 +08:00
Calvin W.
4f6fa4a3f3 docs: update Ubuntu support version (#763) 2024-12-18 20:50:44 +08:00
Calvin W
b6388980a0 update wording and version info 2024-12-18 19:55:57 +08:00
Calvin W
89a667e2b6 update other support versions 2024-12-18 17:59:35 +08:00
Calvin W
31aab6c3ae docs: update Ubuntu support version 2024-12-18 17:48:03 +08:00
Calvin W.
969cd76ac5 docs: reposition Olares as sovereign cloud OS for local AI (#762)
* docs: reposition Olares as sovereign cloud OS for local AI

* update title

* update benefits wording

* Apply suggestions from code review

Co-authored-by: fnalways <110797546+fnalways@users.noreply.github.com>

* Update README_CN.md

Co-authored-by: fnalways <110797546+fnalways@users.noreply.github.com>

* adjust wording for CN

* restructure readme to make it more intuitive and accessible

---------

Co-authored-by: fnalways <110797546+fnalways@users.noreply.github.com>
2024-12-18 17:14:30 +08:00
wiy
f14dc7398c wizard: approve dns check (#761)
feat: update wizard version to v0.5.12
2024-12-18 11:11:36 +08:00
eball
bc615b8a24 olaresd: compatible with glibc 2.31 (#758) 2024-12-17 21:05:29 +08:00
dkeven
dbbe1419cd ci: use stable runner ubuntu-22.04 rather than latest (#756) 2024-12-17 17:49:33 +08:00
dkeven
454401e64f fix(installer): skip conflicting containerd precheck on cloud instance (#757) 2024-12-17 17:16:05 +08:00
dkeven
b62301c38c fix(installer): add precheck for conflicting containerd and ports (#754)
* fix(installer): ensure no containerd already exists before preparing

* ci: remove useless step

---------

Co-authored-by: liuyu <>
2024-12-17 13:25:31 +08:00
eball
20b491a9f7 Update release.yaml 2024-12-16 19:53:29 +08:00
eball
01f6a152f7 Update release-daily.yaml 2024-12-16 19:52:43 +08:00
simon
517d926917 knowledge and download: support LarePass download and fix bilibili extract bug (#748)
* knowledge v0.1.57

* knowledge
2024-12-14 22:39:55 +08:00
hysyeah
3d0528e7cc app-service: fix get metric values error in some situation (#747) 2024-12-14 00:17:01 +08:00
eball
50c6f476ab olares: add .DS_Store to gitignore (#744)
* olares: update gitignore

* Delete apps/download/.DS_Store

* Delete apps/download/config/user/helm-charts/.DS_Store

* remove .DS_Store

---------

Co-authored-by: liuyu <>
2024-12-13 13:59:21 +08:00
dkeven
80bad48cc2 installer: detect public ip during installation (#741) 2024-12-12 19:50:27 +08:00
Sai
101cd5f9d0 market, app-service: support old version install app (#738)
The market version will be upgraded to 0.3.0 so that users on non-latest versions of the operating system can access historical versions of the app. This upgrade aims to enhance user experience by ensuring that even those on older systems can retrieve the necessary app versions.

Key Changes:
* Version Upgrade: The market version will be updated to 0.3.0.
* Support for Historical Versions: Users on non-latest operating systems will be able to access historical versions of the app.
This upgrade is designed to better meet user needs and ensure that all users can effectively utilize our application.
2024-12-11 16:19:02 +08:00
dkeven
f4e9c6f440 installer: use the logger from std lib at cmd entry (#735)
fix(installer): use the logger from std lib at cmd entry
2024-12-11 16:14:59 +08:00
liuyu
22440df66c olares: update runner tags in workflow action 2024-12-11 14:23:02 +08:00
eball
46fd7de998 olares: revert nvshare to v0.0.1 (#733)
Co-authored-by: liuyu <>
2024-12-10 21:42:03 +08:00
lovehunter9
623822bcef files: fix the bug when copying name with space for src xor dst is sync (#732)
* bugfix: fix the bug when copying name with space for src xor dst is sync

* files: fix the decoding issue of folders containing spaces

---------

Co-authored-by: huaiyuan <1029848564@qq.com>
2024-12-10 21:41:35 +08:00
liuyu
1ef0c10a0b olares: bump ci version to 1.12.0 2024-12-10 14:23:19 +08:00
106 changed files with 21075 additions and 1503 deletions


@@ -7,7 +7,7 @@ Title: <subsystem>: <what changed>
* **Target Version for Merge**
<!-- Specify the version to which these changes need to be merged -->
* ***Related Issues**
* **Related Issues**
<!-- Reference any related issues here, if applicable -->
* **PRs Involving Sub-Systems**


@@ -20,7 +20,7 @@ jobs:
bash scripts/build-redis.sh linux/amd64
push-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: Clean
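
The hunk above narrows the ARM64 job from any bare `self-hosted` runner to runners carrying specific labels. A minimal sketch of the resulting job, assuming a self-hosted runner registered with the `linux` and `ARM64` labels; the Clean step body and the `linux/arm64` build argument are placeholders inferred from the amd64 job, not confirmed by this diff:

```yaml
name: Build Redis (ARM64)
on:
  workflow_dispatch:
jobs:
  push-arm64:
    # All three labels must match on a registered self-hosted runner,
    # otherwise the job queues indefinitely.
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - name: Clean
        run: rm -rf "$GITHUB_WORKSPACE"/*   # placeholder cleanup step
      - name: "Checkout source code"
        uses: actions/checkout@v3
      - name: Build
        run: bash scripts/build-redis.sh linux/arm64
```

Label-based selection keeps a mixed fleet of self-hosted runners from picking up jobs built for the wrong architecture.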

.github/workflows/build-wsl2326.yaml (new file, 20 lines)

@@ -0,0 +1,20 @@
name: Build and Upload WSL MSI
on:
workflow_dispatch:
jobs:
push:
runs-on: ubuntu-latest
steps:
- name: "Checkout source code"
uses: actions/checkout@v3
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
run: |
bash scripts/build-wsl-install-msi.sh


@@ -68,22 +68,6 @@ jobs:
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -93,7 +77,7 @@ jobs:
bash scripts/image-manifest.sh && bash scripts/upload-images.sh .manifest/images.mf
push-image-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: 'Checkout source code'
@@ -103,22 +87,6 @@ jobs:
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -140,22 +108,6 @@ jobs:
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -165,7 +117,7 @@ jobs:
bash scripts/deps-manifest.sh && bash scripts/upload-deps.sh
push-deps-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: "Checkout source code"
@@ -178,20 +130,6 @@ jobs:
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -217,7 +155,7 @@ jobs:
- name: 'Test tag version'
id: vars
run: |
v=1.11.0-$(echo $RANDOM)
v=1.12.0-$(echo $RANDOM)
echo "tag_version=$v" >> $GITHUB_OUTPUT
- name: Package installer
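The version-tag steps in these workflows derive the tag from a base version plus a random suffix (test builds) or a date suffix (daily builds). A minimal standalone sketch of that logic (the function names are mine, not from the repo):

```shell
# Sketch of the workflows' tag-version steps, mirroring the `run:` lines:
# test builds append $RANDOM, daily builds append the date.
test_tag()  { echo "1.12.0-${RANDOM}"; }
daily_tag() { echo "1.12.0-$(date +"%Y%m%d")"; }

# In GitHub Actions the result is exported for later steps via $GITHUB_OUTPUT:
#   echo "tag_version=$(daily_tag)" >> "$GITHUB_OUTPUT"
daily_tag
```

Using the date keeps daily tags stable and sortable, while `$RANDOM` gives test runs a throwaway tag that won't collide with a real release.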


@@ -5,7 +5,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: "Checkout source code"
@@ -36,7 +36,7 @@ jobs:
bash scripts/deps-manifest.sh && bash scripts/upload-deps.sh
push-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: "Checkout source code"


@@ -5,7 +5,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: "Checkout source code"
@@ -36,7 +36,7 @@ jobs:
bash scripts/image-manifest.sh && bash scripts/upload-images.sh .manifest/images.mf
push-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: "Checkout source code"


@@ -10,28 +10,12 @@ on:
jobs:
push-images:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: 'Checkout source code'
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -40,29 +24,12 @@ jobs:
bash scripts/image-manifest.sh && bash scripts/upload-images.sh .manifest/images.mf
push-images-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: 'Checkout source code'
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -78,22 +45,6 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -103,29 +54,12 @@ jobs:
bash scripts/deps-manifest.sh && bash scripts/upload-deps.sh
push-deps-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -144,7 +78,7 @@ jobs:
- name: 'Daily tag version'
id: vars
run: |
v=1.11.0-$(date +"%Y%m%d")
v=1.12.0-$(date +"%Y%m%d")
echo "tag_version=$v" >> $GITHUB_OUTPUT
- name: 'Checkout source code'
@@ -154,29 +88,6 @@ jobs:
run: |
bash scripts/build.sh ${{ steps.vars.outputs.tag_version }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# - name: Upload to COS
# run: |
# md5sum install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz > install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt && \
# coscmd upload ./install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt /install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt
# coscmd upload ./install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz /install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz
- name: Upload to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -199,7 +110,7 @@ jobs:
- name: 'Daily tag version'
id: vars
run: |
v=1.11.0-$(date +"%Y%m%d")
v=1.12.0-$(date +"%Y%m%d")
echo "tag_version=$v" >> $GITHUB_OUTPUT
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${v}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
@@ -230,6 +141,7 @@ jobs:
build/installer/publicInstaller.sh
build/installer/install.sh
build/installer/install.ps1
build/installer/joincluster.sh
build/installer/publicAddnode.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh
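The release job above extracts the first field of the published md5sum file with `awk '{print $1}'` to compare against the previous build. A tiny standalone sketch of that parsing step (the sample hash and filename are illustrative):

```shell
# md5sum output is "<hash>  <filename>"; the workflow keeps only the hash.
first_field() { awk '{print $1}'; }

line="d41d8cd98f00b204e9800998ecf8427e  install-wizard-v1.12.0.tar.gz"
printf '%s\n' "$line" | first_field   # -> d41d8cd98f00b204e9800998ecf8427e
```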


@@ -10,7 +10,7 @@ on:
jobs:
push:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: 'Checkout source code'
@@ -18,22 +18,6 @@ jobs:
with:
ref: ${{ github.event.inputs.tags }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -42,7 +26,7 @@ jobs:
bash scripts/image-manifest.sh && bash scripts/upload-images.sh .manifest/images.mf
push-arm64:
runs-on: self-hosted
runs-on: [self-hosted, linux, ARM64]
steps:
- name: 'Checkout source code'
@@ -50,23 +34,6 @@ jobs:
with:
ref: ${{ github.event.inputs.tags }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -89,29 +56,6 @@ jobs:
run: |
bash scripts/build.sh ${{ github.event.inputs.tags }}
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# - name: Upload to COS
# run: |
# md5sum install-wizard-v${{ github.event.inputs.tags }}.tar.gz > install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt && \
# coscmd upload ./install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt /install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt
# coscmd upload ./install-wizard-v${{ github.event.inputs.tags }}.tar.gz /install-wizard-v${{ github.event.inputs.tags }}.tar.gz
- name: Upload to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -174,6 +118,7 @@ jobs:
build/installer/publicInstaller.latest.ps1
build/installer/install.ps1
build/installer/publicAddnode.sh
build/installer/joincluster.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh
prerelease: true

.gitignore vendored

@@ -27,3 +27,4 @@ install-wizard-*.tar.gz
olares-cli-*.tar.gz
!ks-console-*.tgz
.vscode
.DS_Store

README.md

@@ -1,6 +1,6 @@
<div align="center">
# Olares - Your Sovereign Cloud, an Open-Source Self-Hosted Alternative to Public Clouds <!-- omit in toc -->
# Olares: An Open-Source Sovereign Cloud OS for Local AI<!-- omit in toc -->
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/olares)](https://github.com/beclab/olares/commits/main)
@@ -13,11 +13,12 @@
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
https://github.com/user-attachments/assets/5ea2fe30-7bd2-49ed-be26-e12f1d5d8cb1
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*Build your local AI assistants, sync data across places, self-host your workspace, stream your own media, and more—all in your sovereign cloud made possible by Olares.*
@@ -30,32 +31,28 @@ https://github.com/user-attachments/assets/5ea2fe30-7bd2-49ed-be26-e12f1d5d8cb1
</p>
> [!IMPORTANT]
> We just finished our rebranding from Terminus to Olares recently. For more information, refer to our [rebranding blog](https://olares.medium.com/terminus-is-now-olares-2c3bf782f9d1).
## Table of Contents <!-- omit in toc -->
- [Introduction](#introduction)
- [Motivation and design](#motivation-and-design)
- [Tech stacks](#tech-stacks)
- [Features](#features)
- [Feature comparison](#feature-comparison)
- [Getting started](#getting-started)
- [Project navigation](#project-navigation)
- [Contributing to Olares](#contributing-to-olares)
- [Community \& contact](#community--contact)
- [Staying ahead](#staying-ahead)
- [Special thanks](#special-thanks)
> We just finished our rebranding from Terminus to Olares recently. For more information, refer to our [rebranding blog](https://blog.olares.xyz/terminus-is-now-olares/).
## Introduction
Olares is the sovereign cloud that puts you in control. It's an open-source, self-hosted alternative to public clouds like AWS, built to reclaim your data ownership and privacy. By combining the power of Kubernetes with a streamlined interface, Olares enables you to take full control of your data and computing resources. Whether you're managing a homelab, hosting applications, or safeguarding your privacy, Olares delivers the flexibility and capabilities of public clouds, without compromising privacy or security.
Convert your hardware into an AI home server with Olares, an open-source sovereign cloud OS built for local AI.
Typical use cases of Olares include:
- **Run leading AI models on your terms**: Effortlessly host powerful open AI models like LLaMA, Stable Diffusion, Whisper, and Flux.1 directly on your hardware, giving you full control over your AI environment.
- **Deploy with ease**: Discover and install a wide range of open-source AI apps from Olares Market in a few clicks. No more complicated configuration or setup.
- **Access anytime, anywhere**: Access your AI apps and models through a browser whenever and wherever you need them.
- **Integrated AI for smarter AI experience**: Using a [Model Context Protocol](https://spec.modelcontextprotocol.io/specification/) (MCP)-like mechanism, Olares seamlessly connects AI models with AI apps and your private data sets. This creates highly personalized, context-aware AI interactions that adapt to your needs.
🤖 **Local AI**: Host and run world-class open-source AI models locally, including large language models, image generation, and speech recognition. Create custom AI assistants that integrate seamlessly with your personal data and applications, all while ensuring enhanced privacy and control. <br>
💻**Personal data repository**: Securely store, sync, and manage your photos, documents, and important files in a unified storage and access anywhere. <br>
> 🌟 *Star us to receive instant notifications about new releases and updates.*
🛠️ **Self-hosted workspace**: Create a free, powerful workspace for your team or family with open source self-hosted alternatives. <br>
## Why Olares?
Here is why and where you can count on Olares for private, powerful, and secure sovereign cloud experience:
🤖 **Edge AI**: Run cutting-edge open AI models locally, including large language models, computer vision, and speech recognition. Create private AI services tailored to your data for enhanced functionality and privacy. <br>
📊 **Personal data repository**: Securely store, sync, and manage your important files, photos, and documents across devices and locations.<br>
🚀 **Self-hosted workspace**: Build a free collaborative workspace for your team using secure, open-source SaaS alternatives.<br>
🎥 **Private media server**: Host your own streaming services with your personal media collections. <br>
@@ -65,21 +62,35 @@ Typical use cases of Olares include:
📚 **Learning platform**: Explore self-hosting, container orchestration, and cloud technologies hands-on.
## Motivation and design
## Getting started
We believe the current state of the internet, where user data is centralized and exploited by monopolistic corporations, is deeply flawed. Our goal is to empower individuals with true data ownership and control.
### System compatibility
Olares has been tested and verified on the following platforms:
Olares provides a next-generation decentralized Internet framework consisting of the following three integral components:
| Platform | Operating system | Notes |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 20.04 LTS or later <br/> Debian 11 or later | |
| Raspberry Pi | RaspbianOS | Verified on Raspberry Pi 4 Model B and Raspberry Pi 5 |
| Windows | Windows 11 23H2 or later <br/>Windows 10 22H2 or later<br/> WSL2 | |
| Mac | Monterey (12) or later | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
- **Snowinning Protocol**: A decentralized identity and reputation system that integrates decentralized identifiers (DIDs), verifiable credentials (VCs), and reputation data.
- **Olares OS**: An one-stop self-hosted operating system running on edge devices, allowing users to host their own data and applications.
- **LarePass**: A comprehensive client software that securely bridges users to their Olares systems. It offers remote access, identity and device management, data storage, and productivity tools, providing a seamless interface for all Olares interactions.
> **Note**
>
> If you successfully install Olares on an operating system that is not listed in the compatibility table, please let us know! You can [open an issue](https://github.com/beclab/Olares/issues/new) or submit a pull request on our GitHub repository.
## Tech stacks
### Set up Olares
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.xyz/manual/get-started/) for step-by-step instructions.
Public clouds have IaaS, PaaS, and SaaS layers. Olares provides open-source alternatives to these layers.
## Architecture
![Tech Stacks](https://file.bttcdn.com/github/terminus/v2/tech-stack-olares.jpeg)
Olares' architecture is based on two core principles:
- Adopts an Android-like approach to control software permissions and interactivity, ensuring smooth and secure system operations.
- Leverages cloud-native technologies to manage hardware and middleware services efficiently.
![Olares Architecture](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
For detailed description of each component, refer to [Olares architecture](https://docs.olares.xyz/manual/system-architecture.html).
## Features
@@ -94,62 +105,6 @@ Olares offers a wide array of features designed to enhance security, ease of use
- **Seamless anywhere access**: Access your devices from anywhere using dedicated clients for mobile, desktop, and browsers.
- **Development tools**: Comprehensive development tools for effortless application development and porting.
## Feature comparison
To help you understand how Olares stands out in the landscape, we've created a comparison table that highlights its features alongside those of other leading solutions in the market.
**Legend:**
- 🚀: **Auto**, indicates that the system completes the task automatically.
- ✅: **Yes**, indicates that users without a developer background can complete the setup through the product's UI prompts.
- 🛠️: **Manual Configuration**, indicates that even users with an engineering background need to refer to tutorials to complete the setup.
- ❌: **No**, indicates that the feature is not supported.
| | Olares | Synology | TrueNAS | CasaOS | Unraid |
| --- | --- | --- | --- | --- | --- |
| Source Code License | Olares License | Closed | GPL 3.0 | Apache 2.0 | Closed |
| Built On | Kubernetes | Linux | Kubernetes | Docker | Docker |
| Multi-Node | ✅ | ❌ | ✅ | ❌ | ❌ |
| Build-in Apps | ✅ (Rich desktop apps) | ✅ (Rich desktop apps) | ❌ (CLI) | ✅ (Simple desktop apps) | ✅ (Dashboard) |
| Free Domain Name | ✅ | ✅ | ❌ | ❌ | ❌ |
| Auto SSL Certificate | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| Reverse Proxy | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| VPN Management | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Graded App Entrance | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Multi-User Management | ✅ User management <br>🚀 Resource isolation | ✅ User management<br>🛠️ Resource isolation | ✅ User management<br>🛠️ Resource isolation | ❌ | ✅ User management <br>🛠️ Resource isolation |
| Single Login for All Apps | 🚀 | ❌ | ❌ | ❌ | ❌ |
| Cross-Node Storage | 🚀 (Juicefs+<br>MinIO) | ❌ | ❌ | ❌ | ❌ |
| Database Solution | 🚀 (Built-in cloud-native solution) | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Disaster Recovery | 🚀 (MinIO's [**Erasure Coding**](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)**)** | ✅ RAID | ✅ RAID | ✅ RAID | ✅ Unraid Storage |
| Backup | ✅ App Data <br>✅ User Data | ✅ User Data | ✅ User Data | ✅ User Data | ✅ User Data |
| App Sandboxing | ✅ | ❌ | ❌ (K8S's namespace) | ❌ | ❌ |
| App Ecosystem | ✅ (Official + third-party) | ✅ (Majorly official apps) | ✅ (Official + third-party submissions) | ✅ Majorly official apps | ✅ (Community app market) |
| Developer Friendly | ✅ IDE <br>✅ CLI <br>✅ SDK <br>✅ Doc | ✅ CLI <br>✅ SDK <br>✅ Doc | ✅ CLI <br>✅ Doc | ✅ CLI <br>✅ Doc | ✅ Doc |
| Local LLM Hosting | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Local LLM app development | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| Client Platforms | ✅ Android <br>✅ iOS <br>✅ Windows <br>✅ Mac <br>✅ Chrome Plugin | ✅ Android <br>✅ iOS | ❌ | ❌ | ❌ |
| Client Functionality | ✅ (All-in-one client app) | ✅ (14 separate client apps) | ❌ | ❌ | ❌ |
## Getting started
### System compatibility
Olares is available for Linux, Raspberry Pi, Mac, and Windows. It has been tested and verified on the following systems:
| Platform | Operating system | Notes |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 24.04 <br/> Debian 12.8 | |
| Raspberry Pi | RaspbianOS | Verified on Raspberry Pi 4 Model B and Raspberry Pi 5 |
| Windows | Windows 11 23H2 <br/>Windows 10 22H2 | |
| Mac (Apple silicon) | macOS Ventura 13.3.1 | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
> **Note**
>
> If you successfully install Olares on an operating system that is not listed in the compatibility table, please let us know! You can [open an issue](https://github.com/beclab/Olares/issues/new) or submit a pull request on our GitHub repository.
### Set up Olares
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.xyz/manual/get-started/) for step-by-step instructions.
## Project navigation
Olares consists of numerous code repositories publicly available on GitHub. The current repository is responsible for the final compilation, packaging, installation, and upgrade of the operating system, while specific changes mostly take place in their corresponding repositories.
@@ -240,14 +195,6 @@ https://docs.olares.xyz/developer/contribute/olares.html
* [**GitHub Issues**](https://github.com/beclab/olares/issues). Best for filing bugs you encounter using Olares and submitting feature proposals.
* [**Discord**](https://discord.com/invite/BzfqrgQPDK). Best for sharing anything Olares.
## Staying ahead
Star the Olares project to receive instant notifications about new releases and updates.
![star us](https://file.bttcdn.com/github/terminus/terminus.git.v2.gif)
## Special thanks
The Olares project has incorporated numerous third-party open source projects, including: [Kubernetes](https://kubernetes.io/), [Kubesphere](https://github.com/kubesphere/kubesphere), [Padloc](https://padloc.app/), [K3S](https://k3s.io/), [JuiceFS](https://github.com/juicedata/juicefs), [MinIO](https://github.com/minio/minio), [Envoy](https://github.com/envoyproxy/envoy), [Authelia](https://github.com/authelia/authelia), [Infisical](https://github.com/Infisical/infisical), [Dify](https://github.com/langgenius/dify), [Seafile](https://github.com/haiwen/seafile),[HeadScale](https://headscale.net/), [tailscale](https://tailscale.com/), [Redis Operator](https://github.com/spotahome/redis-operator), [Nitro](https://nitro.jan.ai/), [RssHub](http://rsshub.app/), [predixy](https://github.com/joyieldInc/predixy), [nvshare](https://github.com/grgalex/nvshare), [LangChain](https://www.langchain.com/), [Quasar](https://quasar.dev/), [TrustWallet](https://trustwallet.com/), [Restic](https://restic.net/), [ZincSearch](https://zincsearch-docs.zinc.dev/), [filebrowser](https://filebrowser.org/), [lego](https://go-acme.github.io/lego/), [Velero](https://velero.io/), [s3rver](https://github.com/jamhall/s3rver), [Citusdata](https://www.citusdata.com/).


@@ -1,6 +1,6 @@
<div align="center">
# Olares - 您的主权云,一个开源自托管的公有云替代方案<!-- omit in toc -->
# Olares - 为本地 AI 打造的开源私有云操作系统<!-- omit in toc -->
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/terminus)](https://github.com/beclab/olares/commits/main)
@@ -13,12 +13,13 @@
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
[![cover](https://file.bttcdn.com/github/terminus/desktop-dark.jpeg)](https://github.com/user-attachments/assets/5ea2fe30-7bd2-49ed-be26-e12f1d5d8cb1)
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*Olares 让你体验更多可能:构建个人 AI 助理、随时随地同步数据、自托管团队协作空间、打造私人影视厅——无缝整合你的数字生活。*
@@ -30,31 +31,25 @@
<a href="https://space.olares.xyz">Olares Space</a>
</p>
## 目录 <!-- omit in toc -->
- [介绍](#介绍)
- [动机与设计](#动机与设计)
- [技术栈](#技术栈)
- [功能](#功能)
- [功能对比](#功能对比)
- [快速开始](#快速开始)
- [项目目录](#项目目录)
- [社区贡献](#社区贡献)
- [社区支持](#社区支持)
- [持续关注](#持续关注)
- [特别感谢](#特别感谢)
## 介绍
Olares 是一个让您完全掌控的主权云平台。它是一个开源的、自托管的公有云替代方案旨在帮助您重获数据所有权和隐私控制权。通过将Kubernetes的强大功能与简化的用户界面相结合Olares使您能够完全掌控自己的数据和计算资源。无论您是在管理家庭实验环境、部署应用程序还是保护个人隐私Olares都能提供与公有云同等的灵活性和功能同时确保您的隐私和安全不受损害
Olares 是为本地端侧 AI 打造的开源私有云操作系统,可轻松将您的硬件转变为 AI 家庭服务器
- 运行领先 AI 模型:在您的硬件上轻松部署并掌控 LLaMA、Stable Diffusion、Whisper 和 Flux.1 等顶尖开源 AI 模型。
- 轻松部署 AI 应用:通过 Olares 应用市场,轻松部署丰富多样的开源 AI 应用。无需复杂繁琐的配置。
- 随心访问:通过浏览器随时随地访问你的 AI 应用。
- 更智能的专属 AI 体验:通过类似[模型上下文协议](https://spec.modelcontextprotocol.io/specification/)Model Context Protocol, MCP的机制Olares 可让 AI 模型无缝连接 AI 应用与您的私人数据集,提供基于任务场景的个性化 AI 体验。
Olares 支持以下应用场景:
> 为 Olares 点亮 🌟 以及时获取新版本和更新的通知。
## 为什么选择 Olares?
在以下场景中Olares 为您带来私密、强大且安全的私有云体验:
🤖**本地 AI 助手**:在本地部署运行顶级开源 AI 模型,涵盖语言处理、图像生成和语音识别等领域。根据个人需求定制 AI 助手,确保数据隐私和控制权均处于自己手中。<br>
💻**个人数据仓库**:所有个人文件,包括照片、文档和重要资料,都可以在这个安全的统一平台上存储和同步,随时随地都能方便地访问。<br>
🛠️**自托管工作空间**:利用开源解决方案,无需成本即可为家庭或工作团队搭建一个功能强大的工作空间。<br>
🛠️**自托管工作空间**:利用开源 SaaS 平替方案,无需成本即可为家庭或工作团队搭建一个功能强大的工作空间。<br>
🎥**私人媒体服务器**:用自己的视频和音乐库搭建一个私人流媒体服务,随时享受个性化的娱乐体验。<br>
@@ -64,22 +59,39 @@ Olares 支持以下应用场景:
📚**学习探索**:深入学习自托管服务、容器技术和云计算,并上手实践。<br>
## 动机与设计
## 快速开始
我们深知当前互联网的局限性——用户的数据被主流互联网或云服务公司掌控,并用于其商业利益。我们致力于改变这一现状,希望通过 Olares 赋予用户真正的数据所有权和控制权。
### 系统兼容性
Olares 已在以下平台完成测试验证:
Olares 为此提供了一套全新的去中心化互联网框架,主要包括以下三个部分:
| 平台 | 操作系统 | 备注 |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 20.04 LTS 及以上 <br/> Debian 11 及以上 | |
| Raspberry Pi | RaspbianOS | 已在 Raspberry Pi 4 Model B 和 Raspberry Pi 5 上验证 |
| Windows | Windows 11 23H2 及以上 <br/>Windows 10 22H2 及以上 <br/>WSL2 | |
| Mac | macOS Monterey (12) 及以上 | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
- **Snowinning Protocol**一个去中心化的身份和声誉系统融合了去中心化标识符DIDs、可验证凭证VCs以及声誉数据帮助用户在网络世界中安全地管理自己的身份。
- **Olares**:一个专为边缘设备设计的自托管操作系统,用户可以在此系统上自主托管自己的数据和应用,确保数据的私密性和安全性。
- **LarePass**:一款功能全面的客户端软件,通过安全的方式将用户与其 Olares 系统连接起来。它不仅支持远程访问、身份和设备管理,还提供数据存储和各种办公工具,让用户高效管理其日常工作和个人数据
> **注意**
>
> 如果你在未列出的系统版本上成功安装了 Olares请告诉我们你可以在 GitHub 仓库中[提交 Issue](https://github.com/beclab/Olares/issues/new) 或发起 Pull Request
## 技术栈
公有云具有基础设施即服务IaaS、平台即服务PaaS和软件即服务SaaS等层级。Olares 为这些层级提供了开源替代方案。
### 安装 Olares
![技术栈](https://file.bttcdn.com/github/terminus/v2/tech-stack-olares.jpeg)
> 当前文档仅有英文版本。
参考[快速上手指南](https://docs.olares.xyz/manual/get-started/)安装并激活 Olares。
## 功能
## 系统架构
Olares 的架构设计遵循两个核心原则:
- 参考 Android 模式,控制软件权限和交互性,确保系统的流畅性和安全性。
- 借鉴云原生技术,高效管理硬件和中间件服务。
![架构](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
详细描述请参考 [Olares 架构](https://docs.joinolares.cn/zh/manual/system-architecture.html)文档。
## 功能特性
Olares 提供了一系列功能,旨在提升安全性、使用便捷性以及开发的灵活性:
@@ -92,65 +104,6 @@ Olares 提供了一系列功能,旨在提升安全性、使用便捷性以及
- **无缝访问**:通过移动端、桌面端和网页浏览器客户端,从全球任何地方访问设备。
- **开发工具**:提供全面的工具支持,便于开发和移植应用,加速开发进程。
## 功能对比
为了帮您快速了解 Olares 在市场中的独特优势,我们制作了一张功能比较表,详细展示了 Olares 的功能以及与市场上其他主流解决方案的对比。
**图例:**
- 🚀: **自动** - 表示系统自动完成任务。
- ✅: **支持** - 表示无开发背景的用户可以通过产品的 UI 提示完成设置。
- 🛠️: **手动配置** - 表示即使是有工程背景的用户也需要参考教程来完成设置。
- ❌: **不支持** - 表示不支持该功能。
| | Olares | 群晖 | TrueNAS | CasaOS | Unraid |
| --- | --- | --- | --- | --- | --- |
| 源代码许可证 | Olares 许可证 | 闭源 | GPL 3.0 | Apache 2.0 | 闭源 |
| 开发 | Kubernetes | Linux | Kubernetes | Docker | Docker |
| 多节点支持 | ✅ | ❌ | ✅ | ❌ | ❌ |
| 内置应用 | ✅(桌面应用丰富)| ✅(桌面应用丰富) | ❌ (CLI) | ✅ (桌面应用较少) | ✅(面板) |
| 免费域名 | ✅ | ✅ | ❌ | ❌ | ❌ |
| 自动 SSL 证书 | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| 反向代理 | 🚀 | ✅ | 🛠️ | 🛠️ | 🛠️ |
| VPN 管理 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 分级应用入口 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 多用户管理 | ✅ 用户管理 <br>🚀 资源隔离 | ✅ 用户管理 <br>🛠️ 资源隔离 | ✅ 用户管理<br>🛠️ 资源隔离 | ❌ | ✅ 用户管理 <br>🛠️ 资源隔离 |
| 单一登录 | 🚀 | ❌ | ❌ | ❌ | ❌ |
| 跨节点存储 | 🚀 (Juicefs+<br>MinIO) | ❌ | ❌ | ❌ | ❌ |
| 数据库解决方案 | 🚀 (内置云原生解决方案) | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 灾难恢复 | 🚀 (MinIO的[**纠错码**](https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html)**)** | ✅ RAID | ✅ RAID | ✅ RAID | ✅ Unraid Storage |
| 备份 | ✅ 应用数据 <br>✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 | ✅ 用户数据 |
| 应用沙盒 | ✅ | ❌ | ❌ K8S的命名空间 | ❌ | ❌ |
| 应用生态系统 | ✅ (官方 + 第三方应用) | ✅ (官方应用为主) | ✅ (官方应用 + 第三方提交)| ✅ (官方应用为主) | ✅ (社区应用市场) |
| 开发者友好 | ✅ IDE <br>✅ CLI <br>✅ SDK <br>✅ 文档| ✅ CLI <br>✅ SDK <br>✅ 文档 | ✅ CLI <br>✅ 文档 | ✅ CLI <br>✅ 文档 | ✅ 文档 |
| 本地 LLM 部署 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 本地 LLM 应用开发 | 🚀 | 🛠️ | 🛠️ | 🛠️ | 🛠️ |
| 客户端 | ✅ Android <br>✅ iOS <br>✅ Windows <br>✅ Mac <br>✅ Chrome 插件 | ✅ Android <br>✅ iOS | ❌ | ❌ | ❌ |
| 客户端功能 | ✅ (一体化客户端应用) | ✅ 14个分散的客户端应用| ❌ | ❌ | ❌ |
## Getting started
### System compatibility
You can install Olares on Linux, Raspberry Pi, Mac, and Windows. The verified system environments are listed below:
| Platform | Operating system | Notes |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 24.04 <br/> Debian 12.8 | |
| Raspberry Pi | RaspbianOS | Verified on Raspberry Pi 4 Model B and Raspberry Pi 5 |
| Windows | Windows 11 23H2 <br/>Windows 10 22H2 | |
| Mac (Apple Silicon) | macOS Ventura 13.3.1 | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
> **Note**
>
> If you have successfully installed Olares on a system version not listed above, please let us know! You can [open an issue](https://github.com/beclab/Olares/issues/new) or submit a pull request in the GitHub repository.
### Install Olares
> The documentation is currently available in English only.
Refer to the [Getting Started guide](https://docs.olares.xyz/manual/get-started/) to install and activate Olares.
## Project navigation
Olares consists of multiple repositories publicly available on GitHub. The current repository handles the final compilation, packaging, installation, and upgrade of the operating system, while specific changes mostly take place in their corresponding repositories.
@@ -241,14 +194,6 @@ https://docs.olares.xyz/developer/contribute/olares.html
* [**GitHub Discussion**](https://github.com/beclab/olares/discussions) - Discuss questions about using Olares.
* [**GitHub Issues**](https://github.com/beclab/olares/issues) - Report problems you encounter with Olares or propose feature improvements.
* [**Discord**](https://discord.com/invite/BzfqrgQPDK) - Chat, share experience, or discuss anything related to Olares.
## Stay tuned
Star the Olares project to get notified about new releases and updates.
![Star us](https://file.bttcdn.com/github/terminus/terminus.git.v2.gif)
## Special thanks

198
README_JP.md Normal file

@@ -0,0 +1,198 @@
<div align="center">
# Olares: An Open-Source Sovereign Cloud OS for Local AI<!-- omit in toc -->
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/olares)](https://github.com/beclab/olares/commits/main)
![Build Status](https://github.com/beclab/olares/actions/workflows/release-daily.yaml/badge.svg)
[![GitHub release (latest by date)](https://img.shields.io/github/v/release/beclab/olares)](https://github.com/beclab/olares/releases)
[![GitHub Repo stars](https://img.shields.io/github/stars/beclab/olares?style=social)](https://github.com/beclab/olares/stargazers)
[![Discord](https://img.shields.io/badge/Discord-7289DA?logo=discord&logoColor=white)](https://discord.com/invite/BzfqrgQPDK)
[![License](https://img.shields.io/badge/License-Olares-darkblue)](https://github.com/beclab/olares/blob/main/LICENSE.md)
<p>
<a href="./README.md"><img alt="Readme in English" src="https://img.shields.io/badge/English-FFFFFF"></a>
<a href="./README_CN.md"><img alt="Readme in Chinese" src="https://img.shields.io/badge/简体中文-FFFFFF"></a>
<a href="./README_JP.md"><img alt="Readme in Japanese" src="https://img.shields.io/badge/日本語-FFFFFF"></a>
</p>
</div>
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*With Olares, you can build a local AI assistant, sync your data anywhere, self-host your workspace, stream your own media, and much more.*
<p align="center">
<a href="https://olares.xyz">Website</a> ·
<a href="https://docs.olares.xyz">Documentation</a> ·
<a href="https://olares.xyz/larepass">Download LarePass</a> ·
<a href="https://github.com/beclab/apps">Olares Apps</a> ·
<a href="https://space.olares.xyz">Olares Space</a>
</p>
> [!IMPORTANT]
> We have recently completed the rebranding from Terminus to Olares. For more details, read the [rebranding blog](https://blog.olares.xyz/terminus-is-now-olares/).
Transform your hardware into an AI home server with Olares, an open-source sovereign cloud OS for local AI.
- **Run state-of-the-art AI models on your own terms**: Easily host powerful open AI models such as LLaMA, Stable Diffusion, Whisper, and Flux.1 on your own hardware, keeping full control over your AI environment.
- **Deploy with ease**: Discover and install a wide range of open-source AI apps from the Olares Market in a few clicks. No complex configuration or setup required.
- **Access anytime, anywhere**: Access your AI apps and models through a browser whenever you need them.
- **Smarter AI experiences with integrated AI**: Using a mechanism similar to the [Model Context Protocol](https://spec.modelcontextprotocol.io/specification/) (MCP), Olares seamlessly connects AI models with AI apps and your private data sets, enabling highly personalized, context-aware AI interactions that adapt to your needs.
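The MCP-style integration described above can be illustrated with a minimal tool-call message. The `tools/call` method and JSON-RPC framing follow the public MCP specification; the tool name and arguments are hypothetical examples, not part of Olares:

```python
import json

# Minimal MCP-style "tools/call" request (JSON-RPC 2.0 framing).
# The tool name "search_notes" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_notes",
        "arguments": {"query": "vacation photos 2024"},
    },
}

# Serialize for transport, then decode as a tool server would.
payload = json.dumps(request)
decoded = json.loads(payload)
print(decoded["method"])  # tools/call
```

A mechanism like this lets a locally hosted model reach private data through a well-defined tool interface instead of raw file access.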
> 🌟 *Star us to get notified about new releases and updates.*
## Why Olares?
Olares delivers a private, powerful, and secure sovereign cloud experience in the following scenarios:
🤖 **Edge AI**: Run cutting-edge open AI models locally, including large language models, computer vision, and speech recognition. Create private AI services tailored to your data for enhanced functionality and privacy.<br>
📊 **Personal data repository**: Securely store, sync, and manage your important files, photos, and documents across devices and locations.<br>
🚀 **Self-hosted workspace**: Build a free collaborative workspace for your team using secure, open-source SaaS alternatives.<br>
🎥 **Private media server**: Host your personal media collection and run your own streaming service.<br>
🏡 **Smart home hub**: Create a central control point for your IoT devices and home automation.<br>
🤝 **User-owned decentralized social media**: Easily install decentralized social media apps such as Mastodon, Ghost, and WordPress on Olares, and build your personal brand without the risk of platform fees or account suspension.<br>
📚 **Learning platform**: Get hands-on experience with self-hosting, container orchestration, and cloud technologies.
## Getting started
### System compatibility
Olares has been tested and verified on the following platforms:
| Platform | Operating system | Notes |
|---------------------|--------------------------------------|-------------------------------------------------------|
| Linux | Ubuntu 20.04 LTS or later <br/> Debian 11 or later | |
| Raspberry Pi | RaspbianOS | Verified on Raspberry Pi 4 Model B and Raspberry Pi 5 |
| Windows | Windows 11 23H2 or later <br/>Windows 10 22H2 or later<br/> WSL2 | |
| Mac | Monterey (12) or later | |
| Proxmox VE (PVE) | Proxmox Virtual Environment 8.0 | |
> **Note**
>
> If you have successfully installed Olares on an operating system not listed in the compatibility table, please let us know! You can [open an issue](https://github.com/beclab/Olares/issues/new) or submit a pull request in the GitHub repository.
### Set up Olares
To get started with Olares on your own device, follow the step-by-step instructions in the [Getting Started guide](https://docs.olares.xyz/manual/get-started/).
## Architecture
The Olares architecture is built on two fundamental principles:
- Adopting Android's design philosophy, it keeps the system running securely and smoothly by controlling software permissions and interactivity.
- Leveraging cloud-native technologies, it efficiently manages hardware and middleware services.
![Olares architecture](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
For a detailed description of each component, refer to [Olares architecture](https://docs.olares.xyz/manual/system-architecture.html) (in English).
## Features
Olares offers a wide array of features designed to enhance security, ease of use, and development flexibility:
- **Enterprise-grade security**: Simplified network configuration using Tailscale, Headscale, Cloudflare Tunnel, and FRP.
- **Secure and permissionless application ecosystem**: Sandboxing ensures application isolation and security.
- **Unified file system and database**: Automated scaling, backup, and high availability.
- **Single sign-on**: Log in once to access all applications within Olares with a shared authentication service.
- **AI capabilities**: Comprehensive solutions for GPU management, local AI model hosting, and private knowledge bases, all while maintaining data privacy.
- **Built-in applications**: Includes a file manager, sync drive, vault, reader, app market, settings, and dashboard.
- **Seamless access from anywhere**: Access your devices from anywhere using dedicated clients for mobile, desktop, and browsers.
- **Development tools**: Comprehensive development tools that make it easy to develop and port applications.
## Project navigation
Olares consists of multiple code repositories publicly available on GitHub. The current repository handles the final compilation, packaging, installation, and upgrade of the operating system, while specific changes mostly take place in their corresponding repositories.
The table below lists the project directories of Olares and their corresponding repositories. Find the ones that interest you:
<details>
<summary><b>Framework components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [frameworks/app-service](https://github.com/beclab/olares/tree/main/frameworks/app-service) | <https://github.com/beclab/app-service> | A system framework component that provides lifecycle management and various security controls for all apps in the system. |
| [frameworks/backup-server](https://github.com/beclab/olares/tree/main/frameworks/backup-server) | <https://github.com/beclab/backup-server> | A system framework component that provides scheduled full or incremental backup services for the cluster. |
| [frameworks/bfl](https://github.com/beclab/olares/tree/main/frameworks/bfl) | <https://github.com/beclab/bfl> | The Backend For Launcher (BFL), serving as the user access point and aggregating and proxying the interfaces of various backend services. |
| [frameworks/GPU](https://github.com/beclab/olares/tree/main/frameworks/GPU) | <https://github.com/grgalex/nvshare> | A GPU sharing mechanism that allows multiple processes, or containers running on Kubernetes, to securely run concurrently on the same physical GPU, with each process able to use the whole GPU memory. |
| [frameworks/l4-bfl-proxy](https://github.com/beclab/olares/tree/main/frameworks/l4-bfl-proxy) | <https://github.com/beclab/l4-bfl-proxy> | A layer-4 network proxy for BFL. By pre-reading the SNI, it provides a dynamic route to pass through to the user's Ingress. |
| [frameworks/osnode-init](https://github.com/beclab/olares/tree/main/frameworks/osnode-init) | <https://github.com/beclab/osnode-init> | A system framework component that initializes node data when a new node joins the cluster. |
| [frameworks/system-server](https://github.com/beclab/olares/tree/main/frameworks/system-server) | <https://github.com/beclab/system-server> | As part of the system runtime framework, it provides a mechanism for secure calls between apps. |
| [frameworks/tapr](https://github.com/beclab/olares/tree/main/frameworks/tapr) | <https://github.com/beclab/tapr> | Olares application runtime components. |
</details>
<details>
<summary><b>System-level applications and services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [apps/analytic](https://github.com/beclab/olares/tree/main/apps/analytic) | <https://github.com/beclab/analytic> | Developed based on [Umami](https://github.com/umami-software/umami), Analytic is a simple, fast, privacy-focused alternative to Google Analytics. |
| [apps/market](https://github.com/beclab/olares/tree/main/apps/market) | <https://github.com/beclab/market> | This repository deploys the frontend part of the application market in Olares. |
| [apps/market-server](https://github.com/beclab/olares/tree/main/apps/market-server) | <https://github.com/beclab/market> | This repository deploys the backend part of the application market in Olares. |
| [apps/argo](https://github.com/beclab/olares/tree/main/apps/argo) | <https://github.com/argoproj/argo-workflows> | A workflow engine for orchestrating the container execution of local recommendation algorithms. |
| [apps/desktop](https://github.com/beclab/olares/tree/main/apps/desktop) | <https://github.com/beclab/desktop> | The system's built-in desktop application. |
| [apps/devbox](https://github.com/beclab/olares/tree/main/apps/devbox) | <https://github.com/beclab/devbox> | A developer-facing IDE for porting and developing Olares applications. |
| [apps/vault](https://github.com/beclab/olares/tree/main/apps/vault) | <https://github.com/beclab/termipass> | A free alternative to 1Password and Bitwarden for teams and enterprises of any size, developed based on [Padloc](https://github.com/padloc/padloc). It serves as the client that helps you manage DIDs, Olares IDs, and Olares devices. |
| [apps/files](https://github.com/beclab/olares/tree/main/apps/files) | <https://github.com/beclab/files> | A built-in file manager modified from [Filebrowser](https://github.com/filebrowser/filebrowser), providing management of files on Drive, Sync, and the various Olares physical nodes. |
| [apps/notifications](https://github.com/beclab/olares/tree/main/apps/notifications) | <https://github.com/beclab/notifications> | The notification system of Olares. |
| [apps/profile](https://github.com/beclab/olares/tree/main/apps/profile) | <https://github.com/beclab/profile> | Olares' Linktree alternative. |
| [apps/rsshub](https://github.com/beclab/olares/tree/main/apps/rsshub) | <https://github.com/beclab/rsshub> | An RSS subscription management tool based on [RssHub](https://github.com/DIYgod/RSSHub). |
| [apps/settings](https://github.com/beclab/olares/tree/main/apps/settings) | <https://github.com/beclab/settings> | Built-in system settings. |
| [apps/system-apps](https://github.com/beclab/olares/tree/main/apps/system-apps) | <https://github.com/beclab/system-apps> | Built on the _kubesphere/console_ project, system-service provides a self-hosted cloud platform that helps users understand and control the system's runtime status and resource usage through a visual dashboard and the feature-rich ControlHub. |
| [apps/wizard](https://github.com/beclab/olares/tree/main/apps/wizard) | <https://github.com/beclab/wizard> | A wizard application that guides users through the system activation process. |
</details>
<details>
<summary><b>Third-party components and services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [third-party/authelia](https://github.com/beclab/olares/tree/main/third-party/authelia) | <https://github.com/beclab/authelia> | An open-source authentication and authorization server that provides two-factor authentication and single sign-on (SSO) for applications via a web portal. |
| [third-party/headscale](https://github.com/beclab/olares/tree/main/third-party/headscale) | <https://github.com/beclab/headscale> | An open-source, self-hosted implementation of the Tailscale control server in Olares, managing Tailscale across different devices through LarePass. |
| [third-party/infisical](https://github.com/beclab/olares/tree/main/third-party/infisical) | <https://github.com/beclab/infisical> | An open-source secret management platform that syncs secrets across your team and infrastructure and prevents secret leaks. |
| [third-party/juicefs](https://github.com/beclab/olares/tree/main/third-party/juicefs) | <https://github.com/beclab/juicefs-ext> | A distributed POSIX file system built on top of Redis and S3, allowing apps on different nodes to access the same data via the POSIX interface. |
| [third-party/ks-console](https://github.com/beclab/olares/tree/main/third-party/ks-console) | <https://github.com/kubesphere/console> | The Kubesphere console, which enables cluster management via a web GUI. |
| [third-party/ks-installer](https://github.com/beclab/olares/tree/main/third-party/ks-installer) | <https://github.com/beclab/ks-installer-ext> | A Kubesphere installer component that automatically creates Kubesphere clusters based on cluster resource definitions. |
| [third-party/kube-state-metrics](https://github.com/beclab/olares/tree/main/third-party/kube-state-metrics) | <https://github.com/beclab/kube-state-metrics> | kube-state-metrics (KSM) is a simple service that listens to the Kubernetes API server and generates metrics about the state of objects. |
| [third-party/notification-manager](https://github.com/beclab/olares/tree/main/third-party/notification-manager) | <https://github.com/beclab/notification-manager-ext> | Kubesphere's notification management component, providing unified management of multiple notification channels and custom aggregation of notification content. |
| [third-party/predixy](https://github.com/beclab/olares/tree/main/third-party/predixy) | <https://github.com/beclab/predixy> | A proxy service for Redis clusters that automatically identifies available nodes and adds namespace isolation. |
| [third-party/redis-cluster-operator](https://github.com/beclab/olares/tree/main/third-party/redis-cluster-operator) | <https://github.com/beclab/redis-cluster-operator> | A cloud-native tool for creating and managing Redis clusters on top of Kubernetes. |
| [third-party/seafile-server](https://github.com/beclab/olares/tree/main/third-party/seafile-server) | <https://github.com/beclab/seafile-server> | The backend service of the Seafile sync drive, which handles data storage. |
| [third-party/seahub](https://github.com/beclab/olares/tree/main/third-party/seahub) | <https://github.com/beclab/seahub> | The frontend and middleware service of the Seafile sync drive, which handles file sharing, data syncing, and more. |
| [third-party/tailscale](https://github.com/beclab/olares/tree/main/third-party/tailscale) | <https://github.com/tailscale/tailscale> | Tailscale is integrated into LarePass on all platforms. |
</details>
<details>
<summary><b>Additional libraries and components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [build/installer](https://github.com/beclab/olares/tree/main/build/installer) | | The template for generating the installer build. |
| [build/manifest](https://github.com/beclab/olares/tree/main/build/manifest) | | The installation build image list template. |
| [libs/fs-lib](https://github.com/beclab/olares/tree/main/libs) | <https://github.com/beclab/fs-lib> | The SDK library for the iNotify-compatible interface implemented on top of JuiceFS. |
| [scripts](https://github.com/beclab/olares/tree/main/scripts) | | Auxiliary scripts for generating the installer build. |
</details>
## Contributing to Olares
We welcome contributions in any form:
- If you want to develop your own applications on Olares, refer to:<br>
https://docs.olares.xyz/developer/develop/
- If you want to help improve Olares, refer to:<br>
https://docs.olares.xyz/developer/contribute/olares.html
## Community and contact
* [**GitHub Discussion**](https://github.com/beclab/olares/discussions). Best for sharing feedback and asking questions.
* [**GitHub Issues**](https://github.com/beclab/olares/issues). Best for reporting bugs you encounter while using Olares and proposing new features.
* [**Discord**](https://discord.com/invite/BzfqrgQPDK). Best for sharing anything Olares.
## Special thanks
The Olares project integrates numerous third-party open-source projects, including: [Kubernetes](https://kubernetes.io/), [Kubesphere](https://github.com/kubesphere/kubesphere), [Padloc](https://padloc.app/), [K3S](https://k3s.io/), [JuiceFS](https://github.com/juicedata/juicefs), [MinIO](https://github.com/minio/minio), [Envoy](https://github.com/envoyproxy/envoy), [Authelia](https://github.com/authelia/authelia), [Infisical](https://github.com/Infisical/infisical), [Dify](https://github.com/langgenius/dify), [Seafile](https://github.com/haiwen/seafile), [HeadScale](https://headscale.net/), [tailscale](https://tailscale.com/), [Redis Operator](https://github.com/spotahome/redis-operator), [Nitro](https://nitro.jan.ai/), [RssHub](http://rsshub.app/), [predixy](https://github.com/joyieldInc/predixy), [nvshare](https://github.com/grgalex/nvshare), [LangChain](https://www.langchain.com/), [Quasar](https://quasar.dev/), [TrustWallet](https://trustwallet.com/), [Restic](https://restic.net/), [ZincSearch](https://zincsearch-docs.zinc.dev/), [filebrowser](https://filebrowser.org/), [lego](https://go-acme.github.io/lego/), [Velero](https://velero.io/), [s3rver](https://github.com/jamhall/s3rver), and [Citusdata](https://www.citusdata.com/).


@@ -29,59 +29,6 @@ spec:
app: recommend
type: ClusterIP
---
{{ if (eq .Values.debugVersion true) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: recommend
namespace: {{ .Release.Namespace }}
labels:
app: recommend
applications.app.bytetrade.io/author: bytetrade.io
applications.app.bytetrade.io/name: recommend
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/recommend/icon.png
applications.app.bytetrade.io/title: recommend
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"recommend", "host":"argoworkflows-ui", "port":80,"title":"recommend"}]'
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: recommend
template:
metadata:
labels:
app: recommend
io.bytetrade.app: "true"
spec:
containers:
- name: recommend-proxy
image: nginx:stable-alpine3.17-slim
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8080
volumeMounts:
- name: nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: recommend-nginx-configs
items:
- key: nginx.conf
path: nginx.conf
{{ end }}
---


@@ -23,6 +23,7 @@ spec:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
@@ -65,7 +66,7 @@ spec:
containers:
- name: edge-desktop
image: beclab/desktop:v0.2.45
image: beclab/desktop:v0.2.55
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
@@ -77,7 +78,7 @@ spec:
value: http://bfl.{{ .Release.Namespace }}:8080
- name: desktop-server
image: beclab/desktop-server:v0.2.45
image: beclab/desktop-server:v0.2.55
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -139,7 +140,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
image: 'beclab/ws-gateway:v1.0.5'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
@@ -516,9 +517,11 @@ data:
clusters:
- name: original_dst
connect_timeout: 5000s
connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: authelia
connect_timeout: 2s
type: LOGICAL_DNS
@@ -691,6 +694,8 @@ data:
connect_timeout: 5000s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: ws_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS

Binary file not shown.

Binary file not shown.


@@ -146,7 +146,7 @@ spec:
value: user_space_{{ .Values.bfl.username }}_knowledge
containers:
- name: aria2
image: "beclab/aria2:v0.0.3"
image: "beclab/aria2:v0.0.4"
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
@@ -172,7 +172,7 @@ spec:
cpu: "1"
memory: 300Mi
- name: yt-dlp
image: "beclab/yt-dlp:v0.0.16"
image: "beclab/yt-dlp:v0.0.21"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -220,7 +220,7 @@ spec:
cpu: "1"
memory: 300Mi
- name: download-spider
image: "beclab/download-spider:v0.0.15"
image: "beclab/download-spider:v0.0.19"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -251,6 +251,8 @@ spec:
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.download_status"
- name: SETTING_URL
value: http://system-server.user-system-{{ .Values.bfl.username }}/legacy/v1alpha1/service.settings/v1/api/cookie/retrieve
volumeMounts:
- name: download-dir
mountPath: /downloads


@@ -1,11 +1,12 @@
{{- $namespace := printf "%s" "os-system" -}}
{{- $files_secret := (lookup "v1" "Secret" $namespace "files-secrets") -}}
{{- $password := "" -}}
{{- $files_postgres_password := "" -}}
{{ if $files_secret -}}
{{ $password = (index $files_secret "data" "password") }}
{{ $files_postgres_password = (index $files_secret "data" "files_postgres_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{ $files_postgres_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_redis_password := "" -}}
@@ -15,6 +16,14 @@
{{ $files_redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_nats_secret := (lookup "v1" "Secret" "os-system" "files-nats-secrets") -}}
{{- $files_nats_password := "" -}}
{{ if $files_nats_secret -}}
{{ $files_nats_password = (index $files_nats_secret "data" "files_nats_password") }}
{{ else -}}
{{ $files_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
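The template block above uses Helm's lookup-or-generate idiom: if the secret already exists in the cluster, reuse its stored value; otherwise generate a random one. This keeps credentials stable across chart upgrades. The same pattern can be sketched in plain Python (the function and key names here are illustrative, not part of the chart):

```python
import base64
import secrets
import string

def lookup_or_generate(store: dict, key: str, length: int = 16) -> str:
    """Return the existing base64-encoded secret, or create and cache one.

    Mirrors the Helm `lookup` + `randAlphaNum 16 | b64enc` idiom: reusing
    the stored value keeps credentials stable across repeated installs.
    """
    if key in store:
        return store[key]
    alphabet = string.ascii_letters + string.digits
    raw = "".join(secrets.choice(alphabet) for _ in range(length))
    store[key] = base64.b64encode(raw.encode()).decode()
    return store[key]

store = {}  # stands in for the cluster's existing Secret data
first = lookup_or_generate(store, "files_nats_password")
second = lookup_or_generate(store, "files_nats_password")
print(first == second)  # True: the generated secret is reused
```

Without this guard, every `helm upgrade` would mint a new password and break running consumers of the secret.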
---
apiVersion: apps/v1
kind: Deployment
@@ -33,13 +42,18 @@ spec:
metadata:
labels:
app: files
annotations:
# instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
# instrumentation.opentelemetry.io/inject-nginx-container-names: "nginx"
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "gateway,files,uploader"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/filebrowser"
spec:
serviceAccount: os-internal
serviceAccountName: os-internal
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
runAsUser: 0
runAsNonRoot: false
initContainers:
- name: init-data
image: busybox:1.28
@@ -61,16 +75,16 @@ spec:
chown -R 1000:1000 /appdata; chown -R 1000:1000 /appcache; chown -R 1000:1000 /data
containers:
- name: gateway
image: beclab/appdata-gateway:0.1.15
image: beclab/appdata-gateway:0.1.16
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
runAsUser: 0
ports:
- containerPort: 8080
env:
- name: FILES_SERVER_TAG
value: 'beclab/files-server:v0.2.45'
value: 'beclab/files-server:v0.2.61'
- name: NAMESPACE
valueFrom:
fieldRef:
@@ -88,6 +102,10 @@ spec:
value: seafile
image: beclab/media-server:v0.1.10
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 0
privileged: true
ports:
- containerPort: 9090
volumeMounts:
@@ -98,14 +116,15 @@ spec:
{{ if .Values.sharedlib }}
- name: shared-lib
mountPath: /data/External
mountPropagation: Bidirectional
{{ end }}
- name: files
image: beclab/files-server:v0.2.45
image: beclab/files-server:v0.2.61
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 1000
runAsUser: 0
privileged: true
volumeMounts:
- name: fb-data
@@ -157,7 +176,7 @@ spec:
# - name: ZINC_USER
# value: zincuser-files-os-system
# - name: ZINC_PASSWORD
# value: {{ $password | b64dec }}
# value: {{ $files_postgres_password | b64dec }}
# - name: ZINC_HOST
# value: zinc-server-svc.os-system
# - name: ZINC_PORT
@@ -191,6 +210,32 @@ spec:
# use redis db 0 for redis cache
- name: REDIS_DB
value: '0'
- name: NATS_HOST
value: nats
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: os-system-files-server
- name: NATS_PASSWORD
value: {{ $files_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
- name: RESERVED_SPACE
value: '1000'
- name: OLARES_VERSION
value: '1.12'
- name: FILE_CACHE_DIR
value: '/data/file_cache'
- name: PGHOST
value: citus-headless.os-system
- name: PGPORT
value: '5432'
- name: PGUSER
value: files_os_system
- name: PGPASSWORD
value: {{ $files_postgres_password | b64dec }}
- name: PGDB1
value: os_system_files
- name: POD_NAME
valueFrom:
fieldRef:
@@ -207,12 +252,14 @@ spec:
- /filebrowser
- --noauth
- name: uploader
image: beclab/upload:v1.0.7
image: beclab/upload:v1.0.12
env:
- name: UPLOAD_FILE_TYPE
value: '*'
- name: UPLOAD_LIMITED_SIZE
value: '21474836481'
value: '118111600640'
- name: RESERVED_SPACE
value: '1000'
volumeMounts:
- name: fb-data
mountPath: /appdata
@@ -223,13 +270,18 @@ spec:
{{ if .Values.sharedlib }}
- name: shared-lib
mountPath: /data/External
mountPropagation: Bidirectional
{{ end }}
resources: { }
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: true
runAsUser: 0
privileged: true
- name: nginx
image: 'beclab/nginx-lua:n0.0.4'
image: 'nginx:stable-alpine3.17-slim'
securityContext:
runAsNonRoot: false
runAsUser: 0
@@ -237,6 +289,10 @@ spec:
- containerPort: 80
protocol: TCP
volumeMounts:
- name: files-nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: files-nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
@@ -261,6 +317,8 @@ spec:
configMap:
name: files-nginx-config
items:
- key: nginx.conf
path: nginx.conf
- key: default.conf
path: default.conf
defaultMode: 420
@@ -345,10 +403,16 @@ spec:
- sh
- -c
- |
chown -R 1000:1000 /appdata
chown -R 1000:1000 /appdata
- args:
- -it
- nats.os-system:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
containers:
- name: files
image: beclab/files-server:v0.2.45
image: beclab/files-server:v0.2.61
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -361,12 +425,16 @@ spec:
ports:
- containerPort: 8110
env:
- name: FB_DATABASE
value: /appdata/database/filebrowser.db
- name: FB_CONFIG
value: /appdata/config/settings.json
- name: FB_ROOT
- name: ROOT_PREFIX
value: /data
# - name: FB_DATABASE
# value: /appdata/database/filebrowser.db
# - name: FB_CONFIG
# value: /appdata/config/settings.json
# - name: FB_ROOT
# value: /data
- name: OLARES_VERSION
value: '1.12'
- name: NODE_NAME
valueFrom:
fieldRef:
@@ -409,9 +477,39 @@ metadata:
namespace: os-system
type: Opaque
data:
password: {{ $password }}
files_postgres_password: {{ $files_postgres_password }}
files_redis_password: {{ $files_redis_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: files-nats-secrets
namespace: os-system
data:
files_nats_password: {{ $files_nats_password }}
type: Opaque
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-pg
namespace: os-system
spec:
app: files
appNamespace: os-system
middleware: postgres
postgreSQL:
user: files_os_system
password:
valueFrom:
secretKeyRef:
key: files_postgres_password
name: files-secrets
databases:
- name: files
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
@@ -430,6 +528,37 @@ spec:
name: files-secrets
namespace: files-redis
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-server-nat
namespace: os-system
spec:
app: files-server
appNamespace: os-system
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: files_nats_password
name: files-nats-secrets
refs: []
subjects:
- export:
- appName: files-frontend
pub: allow
sub: allow
- appName: vault
pub: allow
sub: allow
name: files-notify
permission:
pub: allow
sub: allow
user: os-system-files-server
---
kind: ConfigMap
apiVersion: v1
@@ -439,6 +568,37 @@ metadata:
annotations:
kubesphere.io/creator: bytetrade.io
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 2700;
#gzip on;
client_max_body_size 4000M;
include /etc/nginx/conf.d/*.conf;
}
default.conf: |-
server {
listen 80 default_server;
@@ -488,12 +648,12 @@ data:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /api/raw/AppData {
@@ -505,12 +665,77 @@ data:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 60s;
client_max_body_size 2000M;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/raw {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/md5 {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/paste {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /api/cache {
proxy_pass http://127.0.0.1:8080;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 1800s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 2700s;
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
}
location /provider {
@@ -562,7 +787,7 @@ data:
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
proxy_request_buffering on;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
@@ -598,12 +823,12 @@ data:
add_header Accept-Ranges bytes;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /seafhttp/ {
@@ -617,12 +842,12 @@ data:
add_header Accept-Ranges bytes;
client_body_timeout 60s;
client_body_timeout 600s;
client_max_body_size 2000M;
proxy_request_buffering off;
keepalive_timeout 75s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
# files
# for all routes matching a dot, check for files and return 404 if not found


@@ -27,6 +27,14 @@
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_frontend_nats_secret := (lookup "v1" "Secret" $namespace "files-frontend-nats-secrets") -}}
{{- $files_frontend_nats_password := "" -}}
{{ if $files_frontend_nats_secret -}}
{{ $files_frontend_nats_password = (index $files_frontend_nats_secret "data" "files_frontend_nats_password") }}
{{ else -}}
{{ $files_frontend_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
@@ -104,6 +112,11 @@ spec:
labels:
app: files
io.bytetrade.app: "true"
annotations:
# support nginx 1.24.3 1.25.3
# instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
# instrumentation.opentelemetry.io/inject-nginx-container-names: "files-frontend"
# instrumentation.opentelemetry.io/otel-go-auto-target-exe: "drive"
spec:
serviceAccountName: bytetrade-controller
securityContext:
@@ -134,6 +147,12 @@ spec:
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
@@ -283,18 +302,29 @@ spec:
# - /filebrowser
# - --noauth
- name: files-frontend
image: beclab/files-frontend:v1.2.69
image: beclab/files-frontend:v1.3.41
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 80
env:
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-files-frontend
- name: NATS_PASSWORD
value: {{ $files_frontend_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
volumeMounts:
- name: userspace-dir
mountPath: /data
- name: drive-server
image: beclab/drive:v0.0.29
image: beclab/drive:v0.0.55
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
@@ -314,8 +344,10 @@ spec:
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
- name: task-executor
image: beclab/driveexecutor:v0.0.29
image: beclab/driveexecutor:v0.0.55
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
@@ -335,6 +367,8 @@ spec:
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
# - name: terminus-upload-sidecar
# image: beclab/upload:v1.0.3
# env:
@@ -397,6 +431,10 @@ spec:
fieldPath: status.podIP
volumes:
- name: data-dir
hostPath:
path: {{ .Values.rootPath }}/rootfs/userspace
type: Directory
- name: watch-dir
hostPath:
type: Directory
@@ -606,6 +644,16 @@ data:
redis_password: {{ $redis_password }}
pg_password: {{ $pg_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: files-frontend-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
files_frontend_nats_password: {{ $files_frontend_nats_password }}
type: Opaque
#---
#apiVersion: apr.bytetrade.io/v1alpha1
#kind: MiddlewareRequest
@@ -646,6 +694,31 @@ spec:
name: zinc-files-secrets
namespace: zinc-files
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-frontend-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: files-frontend
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: files_frontend_nats_password
name: files-frontend-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-files-frontend
---
apiVersion: v1
@@ -694,7 +767,7 @@ data:
prefix: "/"
route:
cluster: original_dst
timeout: 600s
timeout: 1800s
http_protocol_options:
accept_http_10: true
http_filters:
@@ -781,9 +854,11 @@ data:
clusters:
- name: original_dst
connect_timeout: 5000s
connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: upload_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS
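
The `lookup`-then-`randAlphaNum` pattern at the top of this template keeps the generated NATS password stable across `helm upgrade` runs: an existing secret is reused, and a fresh one is generated only on first install. A minimal shell sketch of the same reuse-or-generate logic (the `existing` value stands in for a `kubectl get secret ... -o jsonpath` lookup; names are illustrative):

```shell
# Reuse the stored base64 password if the secret already exists,
# otherwise generate a fresh 16-character alphanumeric one
# (the shell analogue of `randAlphaNum 16 | b64enc`).
existing=""   # e.g. from: kubectl get secret files-frontend-nats-secrets -o jsonpath='{.data.files_frontend_nats_password}'
if [ -n "$existing" ]; then
  pw_b64="$existing"
else
  pw_b64=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16 | base64)
fi
# The container consumes the decoded value, as in `NATS_PASSWORD: {{ ... | b64dec }}`
pw=$(printf '%s' "$pw_b64" | base64 -d)
```

Because the Secret manifest below stores the base64 form, the decode happens only at the point where the password is injected into the container environment.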


@@ -168,7 +168,7 @@ spec:
value: user_space_{{ .Values.bfl.username }}_knowledge
containers:
- name: knowledge
image: "beclab/knowledge-base-api:v0.1.56"
image: "beclab/knowledge-base-api:v0.1.65"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -181,6 +181,8 @@ spec:
value: http://127.0.0.1:8080
- name: RSSHUB_URL
value: 'http://rss-server.os-system:1200'
- name: UPLOAD_SAVE_PATH
value: '/data/Home/Documents/'
- name: SEARCH_URL
value: 'http://search3.os-system:80'
- name: REDIS_PASSWORD
@@ -236,7 +238,7 @@ spec:
memory: 1Gi
- name: backend-server
image: "beclab/recommend-backend:v0.0.24"
image: "beclab/recommend-backend:v0.0.27"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -296,7 +298,7 @@ spec:
- name: YT_DLP_API_URL
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3082/api/v1/get_metadata
- name: DOWNLOAD_API_URL
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3080/api/termius/download
value: http://download-svc.user-space-{{ .Values.bfl.username }}:3080/api
- name: SETTING_API_URL
value: http://system-server.user-system-{{ .Values.bfl.username }}/legacy/v1alpha1/service.settings/v1/api/cookie/retrieve
volumeMounts:
@@ -367,7 +369,7 @@ spec:
memory: 800Mi
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
image: 'beclab/ws-gateway:v1.0.4'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
@@ -380,6 +382,19 @@ spec:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- name: recommend-debug
image: "beclab/recommenddebug:v0.0.25"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
env:
- name: KNOWLEDGE_BASE_API_URL
value: http://127.0.0.1:3010
volumeMounts:
- mountPath: /opt/rank_model
name: model
volumes:
- name: watch-dir
hostPath:
@@ -396,7 +411,10 @@ spec:
items:
- key: envoy.yaml
path: envoy.yaml
- name: model
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appData }}/rss/model
---
apiVersion: v1
@@ -421,6 +439,10 @@ spec:
protocol: TCP
port: 3010
targetPort: 3010
- name: "knowledge-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: v1


@@ -3,7 +3,7 @@
{{- $redis_password := "" -}}
{{ if $market_secret -}}
{{ $redis_password = (index $market_secret "data" "redis_password") }}
{{ $redis_password = (index $market_secret "data" "redis-passwords") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
@@ -85,12 +85,12 @@ spec:
fieldPath: status.podIP
containers:
- name: appstore
image: beclab/market-frontend:v0.2.30
image: beclab/market-frontend:v0.3.6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: appstore-backend
image: beclab/market-backend:v0.2.30
image: beclab/market-backend:v0.3.6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
@@ -170,7 +170,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
image: 'beclab/ws-gateway:v1.0.5'
command:
- /ws-gateway
env:


@@ -38,173 +38,6 @@ spec:
databases:
- name: notifications
{{ if (eq .Values.debugVersion true) }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: notifications-deployment
namespace: {{ .Release.Namespace }}
labels:
app: notifications
applications.app.bytetrade.io/author: bytetrade.io
applications.app.bytetrade.io/name: notifications
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/notifications/icon.png
applications.app.bytetrade.io/title: Notifications
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"notifications", "host":"notifications-service", "port":80,"title":"Notifications"}]'
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: notifications
template:
metadata:
labels:
app: notifications
io.bytetrade.app: "true"
spec:
initContainers:
- args:
- -it
- authelia-backend.os-system:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: notifications-frontend
image: beclab/notifications-frontend:v0.1.22
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumes:
- name: terminus-sidecar-config
configMap:
name: sidecar-configs
items:
- key: envoy.yaml
path: envoy.yaml
# - name: REDIS_HOST
# value: localhost
# - name: REDIS_PORT
# value: "6379"
# - name: notifications-worker
# image: aboveos/notifications-worker:v0.1.2
# imagePullPolicy: IfNotPresent
# env:
# - name: MONGO_URL
# value: mongodb://admin:123456@localhost:27017
# - name: REDIS_HOST
# value: localhost
# - name: REDIS_CACHE_SERVICE_HOST
# value: localhost
# - name: REDIS_PORT
# value: "6379"
# - name: mongodb
# image: mongo:4.4.5
# env:
# - name: MONGO_INITDB_ROOT_USERNAME
# value: admin
# - name: MONGO_INITDB_ROOT_PASSWORD
# value: '123456'
# imagePullPolicy: IfNotPresent
# ports:
# - containerPort: 27017
# volumeMounts:
# - name: mongo-data
# mountPath: /data/db
# - name: redis
# image: redis:7.0.5-alpine3.16
# imagePullPolicy: IfNotPresent
# volumeMounts:
# - name: redis-data
# mountPath: /data
# volumes:
# - name: mongo-data
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/notification/db
# - name: redis-data
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/notification/redisdata
{{ end }}
---
apiVersion: apps/v1
@@ -289,17 +122,6 @@ kind: Service
metadata:
name: notifications-service
namespace: {{ .Release.Namespace }}
{{ if (eq .Values.debugVersion true) }}
spec:
type: ClusterIP
selector:
app: notifications
ports:
- name: "notifications-frontend"
protocol: TCP
port: 80
targetPort: 80
{{ else }}
spec:
type: ClusterIP
selector:
@@ -309,7 +131,6 @@ spec:
protocol: TCP
port: 80
targetPort: 3010
{{ end }}
---
apiVersion: v1


@@ -24,7 +24,7 @@ spec:
spec:
containers:
- name: rss-server
image: beclab/rsshub-server:v0.0.2
image: beclab/rsshub-server:v0.0.5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1200


@@ -199,7 +199,7 @@ spec:
value: os_system_search3
containers:
- name: search3
image: beclab/search3:v0.0.24
image: beclab/search3:v0.0.28
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080


@@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: monitoring-server
image: beclab/monitoring-server-v1:v0.2.3
image: beclab/monitoring-server-v1:v0.2.5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000


@@ -136,7 +136,13 @@ spec:
labels:
app: system-frontend
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-nodejs: "olares-instrumentation"
instrumentation.opentelemetry.io/nodejs-container-names: "settings-server"
# instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
# instrumentation.opentelemetry.io/inject-nginx-container-names: "system-frontend"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
@@ -177,7 +183,7 @@ spec:
apiVersion: v1
fieldPath: status.podIP
- name: dashboard-init
image: beclab/dashboard-frontend-v1:v0.4.4
image: beclab/dashboard-frontend-v1:v0.4.9
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -189,7 +195,7 @@ spec:
- mountPath: /www
name: www-dir
- name: control-hub-init
image: beclab/admin-console-frontend-v1:v0.4.8
image: beclab/admin-console-frontend-v1:v0.5.2
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -201,7 +207,7 @@ spec:
- mountPath: /www
name: www-dir
- name: profile-editor-init
image: beclab/profile-editor:v0.2.0
image: beclab/profile-editor:v0.2.1
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -213,7 +219,7 @@ spec:
- mountPath: /www
name: www-dir
- name: profile-preview-init
image: beclab/profile-preview:v0.2.0
image: beclab/profile-preview:v0.2.1
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -225,7 +231,7 @@ spec:
- mountPath: /www
name: www-dir
- name: wise-init
image: beclab/wise:v1.2.69
image: beclab/wise:v1.3.41
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -237,7 +243,7 @@ spec:
- mountPath: /www
name: www-dir
- name: settings-init
image: beclab/settings:v0.2.0
image: beclab/settings:v0.2.11
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -298,7 +304,7 @@ spec:
- name: www-dir
mountPath: /www
- name: wise-download-dir
mountPath: /data/Home/Downloads
mountPath: /data/Home
- name: system-frontend-nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
@@ -338,7 +344,7 @@ spec:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
image: 'beclab/ws-gateway:v1.0.4'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
@@ -351,7 +357,7 @@ spec:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- name: settings-server
image: beclab/settings-server:v0.2.0
image: beclab/settings-server:v0.2.12
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
@@ -394,7 +400,7 @@ spec:
path: {{ .Values.userspace.userData }}
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
name: sidecar-configs
items:
- key: envoy.yaml
path: envoy.yaml
@@ -403,7 +409,7 @@ spec:
- name: wise-download-dir
hostPath:
type: Directory
path: {{ .Values.userspace.userData }}/Downloads
path: {{ .Values.userspace.userData }}
- name: system-frontend-nginx-config
configMap:
name: system-frontend-nginx-config
@@ -622,6 +628,11 @@ spec:
- settings-event
op: Create
uri: /api/event/app_installation_event
- filters:
type:
- entrance-state-event
op: Create
uri: /api/event/entrance_state_event
- filters:
type:
- system-upgrade-event
@@ -766,6 +777,14 @@ data:
expires 0;
}
location /ws {
proxy_pass http://127.0.0.1:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /bfl {
add_header 'Access-Control-Allow-Headers' 'x-api-nonce,x-api-ts,x-api-ver,x-api-source';
proxy_pass http://bfl;
@@ -779,6 +798,13 @@ data:
location /kapis {
proxy_pass http://SettingsServer;
}
location /api/profile/init {
proxy_pass http://127.0.0.1:3010;
proxy_set_header Host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /api {
proxy_pass http://SettingsServer;
@@ -1048,6 +1074,15 @@ data:
expires 0;
}
location /ws {
proxy_pass http://rss-svc:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /knowledge {
proxy_pass http://KnowledgeServer;
@@ -1079,9 +1114,9 @@ data:
proxy_pass http://ArgoworkflowsSever;
}
location ~ ^/download/preview/Downloads/(.*)$
location ~ ^/download/preview/(.*)$
{
alias /data/Home/Downloads/$1;
alias /data/Home/$1;
}
location /videos/ {
@@ -1102,6 +1137,44 @@ data:
proxy_pass http://media-server-service.os-system:9090;
}
location /api {
proxy_pass http://files-service.os-system:80;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
location /upload {
proxy_pass http://files-service.os-system:80;
# rewrite ^/server(.*)$ $1 break;
# Add original-request-related headers
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
add_header Accept-Ranges bytes;
client_body_timeout 600s;
client_max_body_size 4000M;
proxy_request_buffering off;
keepalive_timeout 750s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
# # files
# # for all routes matching a dot, check for files and return 404 if not found
# # e.g. /file.js returns a 404 if not found
@@ -1173,6 +1246,15 @@ data:
expires 0;
}
location /ws {
proxy_pass http://127.0.0.1:40010;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /kapis {
proxy_pass http://SettingsServer_Monitoring;
# rewrite ^/server(.*)$ $1 break;


@@ -83,7 +83,7 @@ spec:
value: os_system_vault
containers:
- name: vault-server
image: beclab/vault-server:v1.2.69
image: beclab/vault-server:v1.3.41
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
@@ -114,7 +114,7 @@ spec:
- name: vault-attach
mountPath: /padloc/packages/server/attachments
- name: vault-admin
image: beclab/vault-admin:v1.2.69
image: beclab/vault-admin:v1.3.41
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010


@@ -1,3 +1,13 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $vault_nats_secret := (lookup "v1" "Secret" $namespace "vault-nats-secrets") -}}
{{- $vault_nats_password := "" -}}
{{ if $vault_nats_secret -}}
{{ $vault_nats_password = (index $vault_nats_secret "data" "vault_nats_password") }}
{{ else -}}
{{ $vault_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
@@ -36,6 +46,12 @@ spec:
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
@@ -72,13 +88,13 @@ spec:
containers:
- name: vault-frontend
image: beclab/vault-frontend:v1.2.69
image: beclab/vault-frontend:v1.3.41
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: notification-server
image: beclab/vault-notification:v1.2.69
image: beclab/vault-notification:v1.3.41
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
@@ -93,6 +109,17 @@ spec:
value: '{{ .Values.os.vault.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.vault.appKey }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-vault
- name: NATS_PASSWORD
value: {{ $vault_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
@@ -238,3 +265,38 @@ spec:
version: v1
status:
state: active
---
apiVersion: v1
kind: Secret
metadata:
name: vault-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
vault_nats_password: {{ $vault_nats_password }}
type: Opaque
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: vault-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: vault
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: vault_nats_password
name: vault-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-vault
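
The `NATS_*` environment variables injected above are what the vault containers use to reach the per-user NATS instance provisioned by the MiddlewareRequest. A hedged sketch of how they compose into a connection URL (values are illustrative stand-ins, not real credentials):

```shell
# Illustrative values mirroring the template's env entries for user "alice"
NATS_HOST="nats.user-system-alice"
NATS_PORT=4222
NATS_USERNAME="user-system-alice-vault"
NATS_PASSWORD="s3cret"
NATS_SUBJECT="terminus.os-system.files-notify"

nats_url="nats://${NATS_USERNAME}:${NATS_PASSWORD}@${NATS_HOST}:${NATS_PORT}"
echo "$nats_url"
# A client would then pub/sub on $NATS_SUBJECT with that URL,
# e.g. with the nats CLI (if installed):
#   nats sub "$NATS_SUBJECT" --server "$nats_url"
```

The `pub`/`sub` permissions granted on the `files-notify` subject match this usage: vault both publishes to and consumes file-change notifications from files-server.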


@@ -61,7 +61,7 @@ spec:
containers:
- name: wizard
image: beclab/wizard:v0.5.11
image: beclab/wizard:v0.5.12
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80


@@ -28,6 +28,8 @@ spec:
spec:
runtimeClassName: nvidia # Explicitly request the runtime
priorityClassName: system-node-critical
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
initContainers:
- name: init-dir
image: busybox:1.28
@@ -40,7 +42,7 @@ spec:
- "[ -d /var/run/nvshare/libnvshare.so ] && rm -rf /var/run/nvshare/libnvshare.so || true"
containers:
- name: nvshare-lib
image: beclab/nvshare:libnvshare-v0.0.2
image: beclab/nvshare:libnvshare-v0.0.1
command:
- sleep
- infinity
@@ -50,7 +52,7 @@ spec:
command:
- "/bin/sh"
- "-c"
- "test -f /host-var-run-nvshare/libnvshare.so || touch /host-var-run-nvshare/libnvshare.so && mount -v --bind /libnvshare.so /host-var-run-nvshare/libnvshare.so"
- "test -f /host-var-run-nvshare/libnvshare.so || ( test -d /host-var-run-nvshare/libnvshare.so && rm -rf /host-var-run-nvshare/libnvshare.so && false ) || touch /host-var-run-nvshare/libnvshare.so && mount -v --bind /libnvshare.so /host-var-run-nvshare/libnvshare.so"
preStop:
exec:
command:


@@ -44,6 +44,8 @@ spec:
# be rescheduled after a failure.
# See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
priorityClassName: "system-node-critical"
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
containers:
- image: nvcr.io/nvidia/k8s-device-plugin:v0.16.1
name: nvidia-device-plugin-ctr


@@ -26,8 +26,9 @@ spec:
labels:
name: nvshare-scheduler
spec:
runtimeClassName: nvidia # Explicitly request the runtime
priorityClassName: system-node-critical
nodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
initContainers:
- name: init-dir
image: busybox:1.28
@@ -46,6 +47,10 @@ spec:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
command:
- sh
- -c
- "test -f /var/run/nvshare/scheduler.sock && rm -rf /var/run/nvshare/scheduler.sock; pid1 nvshare-scheduler"
volumeMounts:
- name: nvshare-socket-directory
mountPath: /var/run/nvshare


@@ -1,6 +1,8 @@
$currentPath = Get-Location
$architecture = $env:PROCESSOR_ARCHITECTURE
$downloadCdnUrlFromEnv = $env:DOWNLOAD_CDN_URL
$version = "#__VERSION__"
$downloadUrl = "https://dc3p1870nn3cj.cloudfront.net"
function Test-Wait {
while ($true) {
@@ -8,42 +10,78 @@ function Test-Wait {
}
}
$runAsAdmin = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
if (-not $runAsAdmin.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
Write-Host "`n`nThe installation script needs to be run as an administrator.`n"
Write-Host "Please try the following methods:`n"
Write-Host "1. Search for 'PowerShell' in the Start menu, right-click it, and select 'Run as administrator'. "
Write-Host " Navigate to the directory where the installation script is located and run the installation script.`n"
Write-Host "2. Press Win + R, type 'powershell', and then press Ctrl + Shift + Enter. "
Write-Host " Navigate to the directory where the installation script is located and run the installation script.`n"
Write-Host "`nPress Ctrl+C to exit.`n"
Test-Wait
}
$process = Get-Process -Name olares-cli -ErrorAction SilentlyContinue
if ($process) {
Write-Host "olares-cli.exe is running. Press Ctrl+C to exit."
Test-Wait
}
$distro = wsl --list | Select-String -Pattern "^Ubuntu$"
if ($distro) {
Write-Host "Distro Ubuntu exists, please unregister it first."
exit 1
}
$arch = "amd64"
if ($architecture -like "*ARM*") {
$arch = "arm64"
}
$CLI_VERSION = "0.1.75"
if ($downloadCdnUrlFromEnv) {
$downloadUrl = $downloadCdnUrlFromEnv
}
$CLI_PROGRAM_PATH = "{0}\" -f $currentPath
if (-Not (Test-Path $CLI_PROGRAM_PATH)) {
New-Item -Path $CLI_PROGRAM_PATH -ItemType Directory
}
$CLI_VERSION = "0.2.17"
$CLI_FILE = "olares-cli-v{0}_windows_{1}.tar.gz" -f $CLI_VERSION, $arch
$CLI_URL = "https://dc3p1870nn3cj.cloudfront.net/{0}" -f $CLI_FILE
$CLI_PATH = "{0}\{1}" -f $currentPath, $CLI_FILE
if (-Not (Test-Path $CLI_FILE)) {
$CLI_URL = "{0}/{1}" -f $downloadUrl, $CLI_FILE
$CLI_PATH = "{0}{1}" -f $CLI_PROGRAM_PATH, $CLI_FILE
$download = 0
if (Test-Path $CLI_PATH) {
tar -xzf $CLI_PATH -C $CLI_PROGRAM_PATH *> $null
if (-Not ($LASTEXITCODE -eq 0)) {
Remove-Item -Path $CLI_PATH
$download = 1
}
} else {
$download = 1
}
if ($download -eq 1) {
Write-Host "Downloading olares-cli.exe..."
curl -Uri $CLI_URL -OutFile $CLI_PATH
if (-Not (Test-Path $CLI_PATH)) {
Write-Host "Download olares-cli.exe failed."
exit 1
}
tar -xzf $CLI_PATH -C $CLI_PROGRAM_PATH *> $null
$cliPath = "{0}\olares-cli.exe" -f $CLI_PROGRAM_PATH
if ( -Not (Test-Path $cliPath)) {
Write-Host "olares-cli.exe not found."
exit 1
}
}
if (-Not (Test-Path $CLI_PATH)) {
Write-Host "Download olares-cli.exe failed."
exit 1
}
tar -xf $CLI_PATH
$cliPath = "{0}\olares-cli.exe" -f $currentPath
if ( -Not (Test-Path $cliPath)) {
Write-Host "olares-cli.exe not found."
exit 1
}
wsl --unregister Ubuntu *> $null
Start-Sleep -Seconds 3
Write-Host ("Preparing to start the installation of Olares {0}. Depending on your network conditions, this process may take several minutes." -f $version)
$command = "{0} olares install --version {1}" -f $cliPath, $version
$command = "{0}\olares-cli.exe olares install --version {1}" -f $CLI_PROGRAM_PATH, $version
Start-Process cmd -ArgumentList '/k',$command -Wait -Verb RunAs


@@ -20,7 +20,7 @@ fi
if [[ "x${VERSION}" == "x" || "x${VERSION:3}" == "xVERSION__" ]]; then
echo "error: Olares version is unspecified, please set the VERSION env var and rerun this script."
echo "for example: VERSION=1.11.0-20241124 bash $0"
echo "for example: VERSION=1.12.0-20241124 bash $0"
exit 1
fi
@@ -28,16 +28,16 @@ fi
os_type=$(uname -s)
os_arch=$(uname -m)
case "$os_arch" in
arm64) ARCH=arm64; ;;
x86_64) ARCH=amd64; ;;
armv7l) ARCH=arm; ;;
aarch64) ARCH=arm64; ;;
ppc64le) ARCH=ppc64le; ;;
s390x) ARCH=s390x; ;;
case "$os_arch" in
arm64) ARCH=arm64; ;;
x86_64) ARCH=amd64; ;;
armv7l) ARCH=arm; ;;
aarch64) ARCH=arm64; ;;
ppc64le) ARCH=ppc64le; ;;
s390x) ARCH=s390x; ;;
*) echo "error: unsupported arch \"$os_arch\"";
exit 1; ;;
esac
esac
# set shell execute command
user="$(id -un 2>/dev/null || true)"
@@ -74,13 +74,14 @@ if [ -z ${cdn_url} ]; then
cdn_url="https://dc3p1870nn3cj.cloudfront.net"
fi
CLI_VERSION="0.1.75"
CLI_VERSION="0.2.18"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
if [[ x"$os_type" == x"Darwin" ]]; then
CLI_FILE="olares-cli-v${CLI_VERSION}_darwin_${ARCH}.tar.gz"
fi
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$CLI_VERSION" ]]; then
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli is already installed and is the expected version"
echo ""
else
@@ -136,16 +137,22 @@ else
echo ""
else
echo "building local release ..."
$sh_c "olares-cli olares release $PARAMS $CDN"
$sh_c "$INSTALL_OLARES_CLI olares release $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to build local release"
exit 1
fi
fi
else
echo "running system prechecks ..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares precheck $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "downloading installation wizard..."
echo ""
$sh_c "olares-cli olares download wizard $PARAMS $KUBE_PARAM $CDN"
$sh_c "$INSTALL_OLARES_CLI olares download wizard $PARAMS $KUBE_PARAM $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation wizard"
exit 1
@@ -154,7 +161,7 @@ else
echo "downloading installation packages..."
echo ""
$sh_c "olares-cli olares download component $PARAMS $KUBE_PARAM $CDN"
$sh_c "$INSTALL_OLARES_CLI olares download component $PARAMS $KUBE_PARAM $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation packages"
exit 1
@@ -166,10 +173,7 @@ else
if [ x"$REGISTRY_MIRRORS" != x"" ]; then
extra="--registry-mirrors $REGISTRY_MIRRORS"
fi
if [[ "$JUICEFS" == "1" ]]; then
extra="$extra --with-juicefs=true"
fi
$sh_c "olares-cli olares prepare $PARAMS $KUBE_PARAM $extra"
$sh_c "$INSTALL_OLARES_CLI olares prepare $PARAMS $KUBE_PARAM $extra"
if [[ $? -ne 0 ]]; then
echo "error: failed to prepare installation environment"
exit 1
@@ -185,9 +189,39 @@ if [ "$PREINSTALL" == "1" ]; then
echo "Pre Install mode is specified by the \"PREINSTALL\" env var, skip installing"
exit 0
fi
if [[ "$JUICEFS" == "1" ]]; then
echo "JuiceFS is enabled"
fsflag="--with-juicefs=true"
if [[ "$STORAGE" == "" ]]; then
echo "installing MinIO ..."
else
echo "checking storage config ..."
fi
$sh_c "$INSTALL_OLARES_CLI olares install storage $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
fi
if [[ -n "$SWAPPINESS" ]]; then
swapflag="$swapflag --swappiness $SWAPPINESS"
fi
if [[ "$ENABLE_POD_SWAP" == "1" ]]; then
swapflag="$swapflag --enable-pod-swap"
fi
if [[ "$ENABLE_ZRAM" == "1" ]]; then
swapflag="$swapflag --enable-zram"
fi
if [[ -n "$ZRAM_SIZE" ]]; then
swapflag="$swapflag --zram-size $ZRAM_SIZE"
fi
if [[ -n "$ZRAM_SWAP_PRIORITY" ]]; then
swapflag="$swapflag --zram-swap-priority $ZRAM_SWAP_PRIORITY"
fi
echo "installing Olares..."
echo ""
$sh_c "olares-cli olares install $PARAMS $KUBE_PARAM"
$sh_c "$INSTALL_OLARES_CLI olares install $PARAMS $KUBE_PARAM $fsflag $swapflag"
if [[ $? -ne 0 ]]; then
echo "error: failed to install Olares"

build/installer/joincluster.sh (new executable file, 261 lines)

@@ -0,0 +1,261 @@
#!/usr/bin/env bash
set -o pipefail
set -e
function command_exists() {
command -v "$@" > /dev/null 2>&1
}
function read_tty() {
echo -n "$1"
read $2 < /dev/tty
}
function confirm() {
if [[ "$QUIET" == "1" ]]; then
return 0
fi
answer=""
while :; do
read_tty "Do you confirm to continue? (y/n): " answer
if [[ "$answer" != "y" && "$answer" != "n" ]]; then
echo "Please input the letter y or n"
continue
fi
if [[ "$answer" == "y" ]]; then
return 0
fi
if [[ "$answer" == "n" ]]; then
exit 0
fi
done
}
function validate_ip() {
if [[ ! "$1" ]]; then
echo "invalid IP: empty address"
return 1
elif [[ ! $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "invalid IP: illegal format"
return 1
elif [[ $1 =~ ^127 ]]; then
echo "invalid IP: loopback address"
return 1
else
return 0
fi
}
MASTER_SSH_OPTIONS=""
function add_master_host_ssh_options() {
MASTER_SSH_OPTIONS="$MASTER_SSH_OPTIONS --$1 $2"
}
function set_master_host_ssh_options() {
master_host="$MASTER_HOST"
if [[ ! "$master_host" ]]; then
read_tty "Enter the master node's IP: " master_host
fi
while :; do
if ! validate_ip "$master_host"; then
read_tty "Enter the master node's IP: " master_host
else
break
fi
done
add_master_host_ssh_options master-host "$master_host"
if [[ "$MASTER_NODE_NAME" ]]; then
add_master_host_ssh_options master-node-name "$MASTER_NODE_NAME"
fi
if [[ "$MASTER_SSH_USER" ]]; then
add_master_host_ssh_options master-ssh-user "$MASTER_SSH_USER"
else
echo "the environment variable \$MASTER_SSH_USER is not set"
echo "the default remote user \"root\" on the master node will be used to authenticate"
echo "if this is unexpected, please set it explicitly"
confirm
fi
if [[ "$MASTER_SSH_PASSWORD" ]]; then
add_master_host_ssh_options master-ssh-password "$MASTER_SSH_PASSWORD"
fi
if [[ "$MASTER_SSH_PRIVATE_KEY_PATH" ]]; then
add_master_host_ssh_options master-ssh-private-key-path "$MASTER_SSH_PRIVATE_KEY_PATH"
elif [[ ! "$MASTER_SSH_PASSWORD" ]]; then
echo "the environment variable \$MASTER_SSH_PRIVATE_KEY_PATH is not set"
echo "the default key in the local path /root/.ssh/id_rsa will be used to authenticate to the master"
echo "please make sure the key exists and the public key has already been added to the master node"
echo "if this is unexpected, please set it explicitly"
confirm
fi
if [[ "$MASTER_SSH_PORT" ]]; then
add_master_host_ssh_options master-ssh-port "$MASTER_SSH_PORT"
fi
}
function getmasterinfo() {
$sh_c "$INSTALL_OLARES_CLI node masterinfo $MASTER_SSH_OPTIONS" | tee /proc/$$/fd/1
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "" > /proc/$$/fd/1
}
# check os type and arch
os_type=$(uname -s)
os_arch=$(uname -m)
case "$os_arch" in
arm64) ARCH=arm64; ;;
x86_64) ARCH=amd64; ;;
armv7l) ARCH=arm; ;;
aarch64) ARCH=arm64; ;;
ppc64le) ARCH=ppc64le; ;;
s390x) ARCH=s390x; ;;
*) echo "error: unsupported arch \"$os_arch\"";
exit 1; ;;
esac
if [[ "$os_type" != "Linux" ]]; then
echo "error: only Linux machines can be added to the cluster"
exit 1
fi
# set shell execute command
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if ! command_exists sudo; then
echo "error: the ability to run as root is needed, but the command \"sudo\" can not be found"
exit 1
fi
sh_c='sudo -E sh -c'
fi
if ! command_exists tar; then
echo "error: the \"tar\" command is needed to unpack installation files, but can not be found"
exit 1
fi
BASE_DIR="$HOME/.olares"
if [ ! -d $BASE_DIR ]; then
mkdir -p $BASE_DIR
fi
cdn_url=${DOWNLOAD_CDN_URL}
if [[ -z "${cdn_url}" ]]; then
cdn_url="https://dc3p1870nn3cj.cloudfront.net"
fi
set_master_host_ssh_options
CLI_VERSION="0.2.17"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$CLI_VERSION" ]]; then
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli is already installed and is the expected version"
echo ""
else
if [[ ! -f ${CLI_FILE} ]]; then
CLI_URL="${cdn_url}/${CLI_FILE}"
echo "downloading Olares installer from ${CLI_URL} ..."
echo ""
curl -Lo ${CLI_FILE} ${CLI_URL}
if [[ $? -ne 0 ]]; then
echo "error: failed to download Olares installer"
exit 1
else
echo "Olares installer ${CLI_VERSION} download complete!"
echo ""
fi
fi
INSTALL_OLARES_CLI="/usr/local/bin/olares-cli"
echo "unpacking Olares installer to $INSTALL_OLARES_CLI..."
echo ""
tar -zxf ${CLI_FILE} olares-cli && chmod +x olares-cli
$sh_c "mv olares-cli $INSTALL_OLARES_CLI"
if [[ $? -ne 0 ]]; then
echo "error: failed to unpack Olares installer"
exit 1
fi
fi
echo "getting master info and checking the current machine's eligibility to join the cluster"
echo ""
master_olares_version="$( getmasterinfo | grep OlaresVersion | awk '{print $2}' )"
if [[ -z "$master_olares_version" ]]; then
echo "failed to fetch the version of Olares installed on the master node"
exit 1
fi
PARAMS="--version $master_olares_version --base-dir $BASE_DIR"
CDN="--download-cdn-url ${cdn_url}"
if [[ -f $BASE_DIR/.prepared ]]; then
echo "file $BASE_DIR/.prepared detected, skipping the prepare phase"
echo ""
echo "please make sure the prepared Olares version is the same as the master's, or there might be compatibility issues"
echo ""
else
echo "running system prechecks ..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares precheck $PARAMS"
if [[ $? -ne 0 ]]; then
exit 1
fi
echo "downloading installation wizard..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares download wizard $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation wizard"
exit 1
fi
echo "downloading installation packages..."
echo ""
$sh_c "$INSTALL_OLARES_CLI olares download component $PARAMS $CDN"
if [[ $? -ne 0 ]]; then
echo "error: failed to download installation packages"
exit 1
fi
echo "preparing installation environment..."
echo ""
# the env var REGISTRY_MIRRORS is a comma-separated list of Docker registry mirrors
if [ -n "$REGISTRY_MIRRORS" ]; then
extra="--registry-mirrors $REGISTRY_MIRRORS"
fi
$sh_c "$INSTALL_OLARES_CLI olares prepare $PARAMS $extra"
if [[ $? -ne 0 ]]; then
echo "error: failed to prepare installation environment"
exit 1
fi
fi
if [ -f $BASE_DIR/.installed ]; then
echo "file $BASE_DIR/.installed detected, skipping installation"
echo "if it was left behind by an unclean uninstallation, please remove it manually and run the installer again"
exit 0
fi
echo "installing Kubernetes and joining Olares cluster..."
echo ""
$sh_c "$INSTALL_OLARES_CLI node add $PARAMS $MASTER_SSH_OPTIONS"
if [[ $? -ne 0 ]]; then
echo "error: failed to install Olares"
exit 1
fi


@@ -482,7 +482,7 @@ function upgrade_terminus(){
# patch
ensure_success $sh_c "${KUBECTL} apply -f ${BASE_DIR}/deploy/patch-globalrole-workspace-manager.yaml"
ensure_success $sh_c "$KUBECTL apply -f ${BASE_DIR}/deploy/patch-notification-manager.yaml"
# ensure_success $sh_c "$KUBECTL apply -f ${BASE_DIR}/deploy/patch-notification-manager.yaml"
# clear apps values.yaml
cat /dev/null > ${BASE_DIR}/wizard/config/apps/values.yaml


@@ -1,2 +1,2 @@
upgrade:
minVersion: 1.11.0-0000000
minVersion: 1.12.0-0000000


@@ -7,14 +7,18 @@ metadata:
iam.kubesphere.io/uninitialized: "true"
helm.sh/resource-policy: keep
bytetrade.io/owner-role: platform-admin
bytetrade.io/terminus-name: {{.Values.user.terminus_name}}
bytetrade.io/terminus-name: "{{.Values.user.terminus_name}}"
bytetrade.io/launcher-auth-policy: two_factor
bytetrade.io/launcher-access-level: "1"
iam.kubesphere.io/sync-to-lldap: "true"
iam.kubesphere.io/synced-to-lldap: "false"
iam.kubesphere.io/user-provider: lldap
iam.kubesphere.io/globalrole: platform-admin
{{ if .Values.nat_gateway_ip }}
bytetrade.io/nat-gateway-ip: {{ .Values.nat_gateway_ip }}
{{ end }}
spec:
email: {{.Values.user.email}}
password: {{.Values.user.password}}
email: "{{.Values.user.email}}"
initialPassword: "{{ .Values.user.password }}"
status:
state: Active


@@ -0,0 +1,18 @@
apiVersion: iam.kubesphere.io/v1alpha2
kind: Sync
metadata:
name: lldap
spec:
lldap:
name: ldap
url: "http://lldap-service.os-system:17170"
userBlacklist:
- admin
- terminus
groupWhitelist:
- lldap_admin
- lldap_regular
credentialsSecret:
kind: Secret
name: lldap-credentials
namespace: os-system


@@ -33,6 +33,7 @@ rules:
resources:
- users
- configmaps
- secrets
verbs:
- get
@@ -61,6 +62,7 @@ rules:
- pods
- users
- configmaps
- secrets
verbs:
- get
- list


@@ -1,18 +0,0 @@
apiVersion: iam.kubesphere.io/v1alpha2
kind: WorkspaceRoleBinding
metadata:
generation: 1
labels:
iam.kubesphere.io/user-ref: '{{.Values.user.name}}'
kubesphere.io/workspace: system-workspace
name: '{{.Values.user.name}}'
roleRef:
apiGroup: iam.kubesphere.io
kind: WorkspaceRole
name: system-workspace-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: '{{.Values.user.name}}'


@@ -1,4 +1,4 @@
olaresd-v0.0.50.tar.gz,pkg/components,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.50-linux-amd64.tar.gz,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.50-linux-arm64.tar.gz,olaresd
olaresd-v0.0.60.tar.gz,pkg/components,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.60-linux-amd64.tar.gz,https://dc3p1870nn3cj.cloudfront.net/olaresd-v0.0.60-linux-arm64.tar.gz,olaresd
socat-1.7.3.2.tar.gz,pkg/components,https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat
conntrack-tools-1.4.1.tar.gz,pkg/components,https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools
minio.RELEASE.2023-05-04T21-44-30Z,pkg/components,https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2023-05-04T21-44-30Z,https://dl.min.io/server/minio/release/linux-arm64/archive/minio.RELEASE.2023-05-04T21-44-30Z,minio
@@ -14,8 +14,11 @@ ubuntu2204_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.
ubuntu2204_cuda-keyring_1.0-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu-22.04_cuda-keyring_1.0-1
ubuntu2004_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb,ubuntu-20.04_cuda-keyring_1.1-1
ubuntu2004_cuda-keyring_1.0-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu-20.04_cuda-keyring_1.0-1
debian12_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,debian-12_cuda-keyring_1.1-1
debian11_cuda-keyring_1.1-1_all.deb,pkg/components,https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb,https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,debian-11_cuda-keyring_1.1-1
gpgkey,pkg/components,https://nvidia.github.io/libnvidia-container/gpgkey,https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
ubuntu_22.04_libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
ubuntu_20.04_libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
libnvidia-gpgkey,pkg/components,https://nvidia.github.io/libnvidia-container/gpgkey,https://nvidia.github.io/libnvidia-container/gpgkey,libnvidia-gpgkey
libnvidia-container.list,pkg/components,https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list,https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list,libnvidia-container.list
restic-linux-0.17.3,pkg/components,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_arm64.bz2,restic
restic-darwin-0.17.3,pkg/components,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_darwin_amd64.bz2,https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_darwin_arm64.bz2,restic


@@ -1,53 +0,0 @@
[components] format: url,filename
https://github.com/beclab/Installer/releases/download/0.1.13/terminus-cli-v0.1.13_linux_amd64.tar.gz,terminus-cli-v0.1.13_linux_amd64.tar.gz
https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat-1.7.3.2.tar.gz
https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools-1.4.1.tar.gz
https://dl.min.io/server/minio/release/linux-amd64/archive/minio.RELEASE.2023-05-04T21-44-30Z,minio.RELEASE.2023-05-04T21-44-30Z
https://github.com/beclab/minio-operator/releases/download/v0.0.1/minio-operator-v0.0.1-linux-amd64.tar.gz,minio-operator-v0.0.1-linux-amd64.tar.gz
https://download.redis.io/releases/redis-5.0.14.tar.gz,redis-5.0.14.tar.gz
https://github.com/beclab/juicefs-ext/releases/download/v11.1.1/juicefs-v11.1.1-linux-amd64.tar.gz,juicefs-v11.1.1-linux-amd64.tar.gz
https://github.com/beclab/velero/releases/download/v1.11.3/velero-v1.11.3-linux-amd64.tar.gz,velero-v1.11.3-linux-amd64.tar.gz
https://launchpad.net/ubuntu/+source/apparmor/4.0.1-0ubuntu1/+build/28428840/+files/apparmor_4.0.1-0ubuntu1_amd64.deb,apparmor_4.0.1-0ubuntu1_amd64.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_24.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu2404_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_22.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu_22.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu2204_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb,ubuntu_20.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu_20.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb,ubuntu2004_cuda-keyring_1.0-1_all.deb
https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
[pkg] format: url,path,filename,special,cpname
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz,cni/v0.9.1,,,
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz,cni/v1.1.1,,,
https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-amd64.tar.gz,containerd/1.6.4,,,
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz,crictl/v1.24.0,,,
https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz,etcd/v3.4.13,,,
https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz,helm/v3.9.0,,helm,helm-v3.9.0
https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s,kube/v1.21.5,,,k3s-v1.21.5
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubeadm,kube/v1.22.10,,kubeadm,kubeadm-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubelet,kube/v1.22.10,,kubelet,kubelet-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubectl,kube/v1.22.10,,kubectl,kubectl-v1.22.10
https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64,runc/v1.1.1,,,runc-v1.1.1
https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64,runc/v1.1.4,,,runc-v1.1.4


@@ -1,53 +0,0 @@
[components] format: url,filename
https://github.com/beclab/Installer/releases/download/0.1.13/terminus-cli-v0.1.13_linux_amd64.tar.gz,terminus-cli-v0.1.13_linux_amd64.tar.gz
https://src.fedoraproject.org/lookaside/pkgs/socat/socat-1.7.3.2.tar.gz/sha512/540658b2a3d1b87673196282e5c62b97681bd0f1d1e4759ff9d72909d11060235ee9e9521a973603c1b00376436a9444248e5fbc0ffac65f8edb9c9bc28e7972/socat-1.7.3.2.tar.gz,socat-1.7.3.2.tar.gz
https://github.com/fqrouter/conntrack-tools/archive/refs/tags/conntrack-tools-1.4.1.tar.gz,conntrack-tools-1.4.1.tar.gz
https://dl.min.io/server/minio/release/linux-arm64/archive/minio.RELEASE.2023-05-04T21-44-30Z,
https://github.com/beclab/minio-operator/releases/download/v0.0.1/minio-operator-v0.0.1-linux-arm64.tar.gz,minio-operator-v0.0.1-linux-arm64.tar.gz
https://download.redis.io/releases/redis-5.0.14.tar.gz,redis-5.0.14.tar.gz
https://github.com/beclab/juicefs-ext/releases/download/v11.1.1/juicefs-v11.1.1-linux-arm64.tar.gz,juicefs-v11.1.1-linux-arm64.tar.gz
https://github.com/beclab/velero/releases/download/v1.11.3/velero-v1.11.3-linux-arm64.tar.gz,velero-v1.11.3-linux-arm64.tar.gz
https://launchpad.net/ubuntu/+source/apparmor/4.0.1-0ubuntu1/+build/28428841/+files/apparmor_4.0.1-0ubuntu1_arm64.deb,apparmor_4.0.1-0ubuntu1_arm64.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_24.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/arm64/cuda-keyring_1.1-1_all.deb,ubuntu2404_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_22.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu_22.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.0-1_all.deb,ubuntu2204_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.1-1_all.deb,ubuntu_20.04_cuda-keyring_1.1-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu_20.04_cuda-keyring_1.0-1_all.deb
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb,ubuntu2004_cuda-keyring_1.0-1_all.deb
https://nvidia.github.io/libnvidia-container/gpgkey,gpgkey
https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list,ubuntu_22.04_libnvidia-container.list
https://nvidia.github.io/libnvidia-container/ubuntu20.04/libnvidia-container.list,ubuntu_20.04_libnvidia-container.list
[pkg] format: url,path,filename,special,cpname
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz,cni/v0.9.1,,
https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz,cni/v1.1.1,,
https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-arm64.tar.gz,containerd/1.6.4,,
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-arm64.tar.gz,crictl/v1.24.0,,
https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-arm64.tar.gz,etcd/v3.4.13,,
https://get.helm.sh/helm-v3.9.0-linux-arm64.tar.gz,helm/v3.9.0,,helm,helm-v3.9.0
https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s-arm64,kube/v1.21.5,,,k3s-v1.21.5
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubeadm,kube/v1.22.10,,kubeadm,kubeadm-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubelet,kube/v1.22.10,,kubelet,kubelet-v1.22.10
https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubectl,kube/v1.22.10,,kubectl,kubectl-v1.22.10
https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64,runc/v1.1.1,,,runc-v1.1.1
https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64,runc/v1.1.4,,,runc-v1.1.4


@@ -1,50 +1,23 @@
beclab/ks-apiserver:v3.3.0-ext-3
beclab/kube-state-metrics:v2.3.0-ext
beclab/notification-manager-ext:v0.1.1-ext
beclab/notification-manager-operator-ext:v0.1.0-ext
beclab/notification-tenant-sidecar:v0.1.0
calico/cni:v3.23.2
calico/cni:v3.27.3
calico/kube-controllers:v3.23.2
calico/kube-controllers:v3.27.3
calico/node:v3.23.2
calico/node:v3.27.3
calico/pod2daemon-flexvol:v3.23.2
beclab/ks-apiserver:0.0.5
beclab/ks-controller-manager:0.0.5
beclab/kube-state-metrics:v2.3.0-ext.1
calico/cni:v3.29.2
calico/kube-controllers:v3.29.2
calico/node:v3.29.2
beclab/citus:12.2
csiplugin/snapshot-controller:v4.0.0
beclab/ks-installer-ext:v0.1.9-ext
kubesphere/k8s-dns-node-cache:1.15.12
kubesphere/ks-console:v3.3.0
kubesphere/ks-controller-manager:v3.3.0
kubesphere/kube-apiserver:v1.22.10
kubesphere/kube-apiserver:v1.21.4
kubesphere/kube-controller-manager:v1.22.10
kubesphere/kube-controller-manager:v1.21.4
kubesphere/kubectl:v1.22.0
kubesphere/kube-proxy:v1.22.10
kubesphere/kube-proxy:v1.21.4
kubesphere/kube-rbac-proxy:v0.12.0
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/kube-scheduler:v1.22.10
kubesphere/kube-scheduler:v1.21.4
kubesphere/pause:3.5
kubesphere/pause:3.4.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/kube-scheduler:v1.22.10
k8s.gcr.io/kube-proxy:v1.22.10
k8s.gcr.io/kube-controller-manager:v1.22.10
k8s.gcr.io/kube-apiserver:v1.22.10
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
registry.k8s.io/pause:3.5
bitnami/kube-rbac-proxy:0.19.0
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
mirrorgooglecontainers/defaultbackend-amd64:1.4
openebs/linux-utils:3.3.0
openebs/provisioner-localpv:3.3.0
beclab/percona-server-mongodb-operator:1.15.2
prom/alertmanager:v0.23.0
prom/node-exporter:v1.3.1
prom/prometheus:v2.34.0
quay.io/argoproj/argocli:v3.5.0
@@ -57,15 +30,12 @@ beclab/l4-bfl-proxy:v0.2.7
gcr.io/k8s-minikube/storage-provisioner:v5
owncloudci/wait-for:latest
beclab/recommend-argotask:v0.0.12
nvcr.io/nvidia/k8s-device-plugin:v0.16.1
beclab/nvshare:libnvshare-v0.0.2
bytetrade/nvshare:nvshare-device-plugin
bytetrade/nvshare:nvshare-scheduler
beclab/nats-server-config-reloader:v1
beclab/cloudflared:v0.1.0
rancher/mirrored-library-busybox:1.34.1
rancher/mirrored-library-traefik:2.6.2
rancher/mirrored-metrics-server:v0.5.2
rancher/mirrored-pause:3.6
beclab/reverse-proxy:v0.1.4
beclab/upgrade-job:0.1.5
beclab/upgrade-job:0.1.7
bytetrade/envoy:v1.25.11.1
liangjw/kube-webhook-certgen:v1.1.1
beclab/hami:v2.5.0
alpine:3.14
mirrorgooglecontainers/defaultbackend-amd64:1.4


@@ -1,7 +1,8 @@
kubesphere/pause:3.5
calico/cni:v3.23.2
calico/node:v3.23.2
kubesphere/kube-rbac-proxy:v0.11.0
registry.k8s.io/pause:3.10
calico/cni:v3.29.2
calico/kube-controllers:v3.29.2
calico/node:v3.29.2
bitnami/kube-rbac-proxy:0.19.0
prom/node-exporter:v1.3.1
beclab/image-service:0.2.12
beclab/osnode-init:v0.0.10


@@ -1,12 +1,10 @@
cni-plugins-v0.9.1.tgz,pkg/cni/v0.9.1,https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz,https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz,cni-plugins-k3s
cni-plugins-v1.1.1.tgz,pkg/cni/v1.1.1,https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz,https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz,cni-plugins-k8s
cni-plugins-v1.6.2.tgz,pkg/cni/v1.6.2,https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz,https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm-v1.6.2.tgz,cni-plugins
containerd-1.6.4.tar.gz,pkg/containerd/1.6.4,https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-amd64.tar.gz,https://github.com/containerd/containerd/releases/download/v1.6.4/containerd-1.6.4-linux-arm64.tar.gz,containerd
crictl-v1.24.0-linux-amd64.tar.gz,pkg/crictl/v1.24.0,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-arm64.tar.gz,crictl
etcd-v3.4.13.tar.gz,pkg/etcd/v3.4.13,https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz,https://github.com/coreos/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-arm64.tar.gz,etcd
helm-v3.9.0.tar.gz,pkg/helm/v3.9.0,https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz,https://get.helm.sh/helm-v3.9.0-linux-arm64.tar.gz,helm
k3s,pkg/kube/v1.21.5,https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s,https://github.com/k3s-io/k3s/releases/download/v1.21.5+k3s1/k3s-arm64,k3s
kubeadm,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubeadm,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubeadm,kubeadm
kubelet,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubelet,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubelet,kubelet
kubectl,pkg/kube/v1.22.10,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/amd64/kubectl,https://storage.googleapis.com/kubernetes-release/release/v1.22.10/bin/linux/arm64/kubectl,kubectl
runc,pkg/runc/v1.1.1,https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64,runc-k3s
runc,pkg/runc/v1.1.4,https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64,runc-k8s
crictl-v1.32.0.tar.gz,pkg/crictl/v1.32.0,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz,https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-arm64.tar.gz,crictl
etcd-v3.5.18.tar.gz,pkg/etcd/v3.5.18,https://github.com/coreos/etcd/releases/download/v3.5.18/etcd-v3.5.18-linux-amd64.tar.gz,https://github.com/coreos/etcd/releases/download/v3.5.18/etcd-v3.5.18-linux-arm64.tar.gz,etcd
helm-v3.9.0.tar.gz,pkg/helm/v3.9.0,https://get.helm.sh/helm-v3.17.1-linux-amd64.tar.gz,https://get.helm.sh/helm-v3.17.1-linux-arm.tar.gz,helm
k3s-v1.32.2,pkg/kube/v1.32.2,https://github.com/k3s-io/k3s/releases/download/v1.32.2+k3s1/k3s,https://github.com/k3s-io/k3s/releases/download/v1.32.2+k3s1/k3s-arm64,k3s
kubeadm-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubeadm,kubeadm
kubelet-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubelet,kubelet
kubectl-v1.32.2,pkg/kube/v1.32.2,https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl,https://dl.k8s.io/release/v1.32.2/bin/linux/arm64/kubectl,kubectl
runc-v1.2.5,pkg/runc/v1.2.5,https://github.com/opencontainers/runc/releases/download/v1.2.5/runc.amd64,https://github.com/opencontainers/runc/releases/download/v1.2.5/runc.arm64,runc


@@ -0,0 +1,16 @@
apiVersion: v2
name: hami
version: 2.5.0
kubeVersion: ">= 1.16.0"
description: Heterogeneous AI Computing Virtualization Middleware
keywords:
- vgpu
- gpu
type: application
maintainers:
- name: limengxuan
email: limengxuan@4paradigm.com
- name: zhangxiao
email: xiaozhang0210@hotmail.com
appVersion: "2.5.0"


@@ -0,0 +1,3 @@
** Please be patient while the chart is being deployed **
Resource name: {{ .Values.resourceName }}


@@ -0,0 +1,108 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "hami-vgpu.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "hami-vgpu.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
The app name for Scheduler
*/}}
{{- define "hami-vgpu.scheduler" -}}
{{- printf "%s-scheduler" ( include "hami-vgpu.fullname" . ) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The app name for DevicePlugin
*/}}
{{- define "hami-vgpu.device-plugin" -}}
{{- printf "%s-device-plugin" ( include "hami-vgpu.fullname" . ) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The tls secret name for Scheduler
*/}}
{{- define "hami-vgpu.scheduler.tls" -}}
{{- printf "%s-scheduler-tls" ( include "hami-vgpu.fullname" . ) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The webhook name
*/}}
{{- define "hami-vgpu.scheduler.webhook" -}}
{{- printf "%s-webhook" ( include "hami-vgpu.fullname" . ) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "hami-vgpu.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "hami-vgpu.labels" -}}
helm.sh/chart: {{ include "hami-vgpu.chart" . }}
{{ include "hami-vgpu.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "hami-vgpu.selectorLabels" -}}
app.kubernetes.io/name: {{ include "hami-vgpu.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Image registry secret name
*/}}
{{- define "hami-vgpu.imagePullSecrets" -}}
imagePullSecrets: {{ toYaml .Values.imagePullSecrets | nindent 2 }}
{{- end }}
{{/*
Resolve the tag for kubeScheduler.
*/}}
{{- define "resolvedKubeSchedulerTag" -}}
{{- if .Values.scheduler.kubeScheduler.imageTag }}
{{- .Values.scheduler.kubeScheduler.imageTag | trim -}}
{{- else }}
{{- include "strippedKubeVersion" . | trim -}}
{{- end }}
{{- end }}
{{/*
Return the stripped Kubernetes version string by removing extra parts after semantic version number.
v1.31.1+k3s1 -> v1.31.1
v1.30.8-eks-2d5f260 -> v1.30.8
v1.31.1 -> v1.31.1
*/}}
{{- define "strippedKubeVersion" -}}
{{ regexReplaceAll "^(v[0-9]+\\.[0-9]+\\.[0-9]+)(.*)$" .Capabilities.KubeVersion.Version "$1" }}
{{- end -}}
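The `strippedKubeVersion` helper's regex can be exercised outside Helm. A shell equivalent using `sed -E` (illustrative only; the chart itself uses Sprig's `regexReplaceAll`):

```shell
#!/usr/bin/env bash
# shell equivalent of the chart's strippedKubeVersion helper:
# keep only the leading vMAJOR.MINOR.PATCH of a Kubernetes version string
strip_kube_version() {
  echo "$1" | sed -E 's/^(v[0-9]+\.[0-9]+\.[0-9]+).*$/\1/'
}

strip_kube_version "v1.31.1+k3s1"         # v1.31.1
strip_kube_version "v1.30.8-eks-2d5f260"  # v1.30.8
strip_kube_version "v1.31.1"              # v1.31.1
```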


@@ -0,0 +1,24 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}
labels:
app.kubernetes.io/component: hami-device-plugin
{{- include "hami-vgpu.labels" . | nindent 4 }}
data:
config.json: |
{
"nodeconfig": [
{
"name": "m5-cloudinfra-online02",
"operatingmode": "hami-core",
"devicememoryscaling": 1.8,
"devicesplitcount": 10,
"migstrategy":"none",
"filterdevices": {
"uuid": [],
"index": []
}
}
]
}


@@ -0,0 +1,170 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}
labels:
app.kubernetes.io/component: hami-device-plugin
{{- include "hami-vgpu.labels" . | nindent 4 }}
{{- with .Values.global.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.global.annotations }}
annotations: {{ toYaml .Values.global.annotations | nindent 4}}
{{- end }}
spec:
updateStrategy:
{{- with .Values.devicePlugin.updateStrategy }}
{{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
app.kubernetes.io/component: hami-device-plugin
{{- include "hami-vgpu.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
app.kubernetes.io/component: hami-device-plugin
hami.io/webhook: ignore
{{- include "hami-vgpu.selectorLabels" . | nindent 8 }}
{{- if .Values.devicePlugin.podAnnotations }}
annotations: {{ toYaml .Values.devicePlugin.podAnnotations | nindent 8 }}
{{- end }}
spec:
{{- if .Values.devicePlugin.runtimeClassName }}
runtimeClassName: {{ .Values.devicePlugin.runtimeClassName }}
{{- end }}
{{- include "hami-vgpu.imagePullSecrets" . | nindent 6}}
serviceAccountName: {{ include "hami-vgpu.device-plugin" . }}
priorityClassName: system-node-critical
hostPID: true
hostNetwork: true
containers:
- name: device-plugin
image: {{ .Values.devicePlugin.image }}:{{ .Values.version }}
imagePullPolicy: {{ .Values.devicePlugin.imagePullPolicy | quote }}
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c", {{ printf "/k8s-vgpu/bin/vgpu-init.sh %s/vgpu/" .Values.global.gpuHookPath | quote }}]
command:
- nvidia-device-plugin
- --config-file=/device-config.yaml
- --mig-strategy={{ .Values.devicePlugin.migStrategy }}
- --disable-core-limit={{ .Values.devicePlugin.disablecorelimit }}
{{- range .Values.devicePlugin.extraArgs }}
- {{ . }}
{{- end }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NVIDIA_MIG_MONITOR_DEVICES
value: all
- name: HOOK_PATH
value: {{ .Values.global.gpuHookPath }}
{{- if typeIs "bool" .Values.devicePlugin.passDeviceSpecsEnabled }}
- name: PASS_DEVICE_SPECS
value: {{ .Values.devicePlugin.passDeviceSpecsEnabled | quote }}
{{- end }}
securityContext:
privileged: true
allowPrivilegeEscalation: true
capabilities:
drop: ["ALL"]
add: ["SYS_ADMIN"]
resources:
{{- toYaml .Values.devicePlugin.resources | nindent 12 }}
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: lib
mountPath: {{ printf "%s%s" .Values.global.gpuHookPath "/vgpu" }}
- name: usrbin
mountPath: /usrbin
- name: deviceconfig
mountPath: /config
- name: hosttmp
mountPath: /tmp
- name: device-config
mountPath: /device-config.yaml
subPath: device-config.yaml
- name: vgpu-monitor
image: {{ .Values.devicePlugin.image }}:{{ .Values.version }}
imagePullPolicy: {{ .Values.devicePlugin.imagePullPolicy | quote }}
command:
- "vGPUmonitor"
{{- range .Values.devicePlugin.extraArgs }}
- {{ . }}
{{- end }}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
add: ["SYS_ADMIN"]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NVIDIA_VISIBLE_DEVICES
value: "all"
- name: NVIDIA_MIG_MONITOR_DEVICES
value: all
- name: HOOK_PATH
value: {{ .Values.global.gpuHookPath }}/vgpu
resources:
{{- toYaml .Values.devicePlugin.vgpuMonitor.resources | nindent 12 }}
volumeMounts:
- name: ctrs
mountPath: {{ .Values.devicePlugin.monitorctrPath }}
- name: dockers
mountPath: /run/docker
- name: containerds
mountPath: /run/containerd
- name: sysinfo
mountPath: /sysinfo
- name: hostvar
mountPath: /hostvar
- name: hosttmp
mountPath: /tmp
volumes:
- name: ctrs
hostPath:
path: {{ .Values.devicePlugin.monitorctrPath }}
- name: hosttmp
hostPath:
path: /tmp
- name: dockers
hostPath:
path: /run/docker
- name: containerds
hostPath:
path: /run/containerd
- name: device-plugin
hostPath:
path: {{ .Values.devicePlugin.pluginPath }}
- name: lib
hostPath:
path: {{ .Values.devicePlugin.libPath }}
- name: usrbin
hostPath:
path: /usr/bin
- name: sysinfo
hostPath:
path: /sys
- name: hostvar
hostPath:
path: /var
- name: deviceconfig
configMap:
name: {{ template "hami-vgpu.device-plugin" . }}
- name: device-config
configMap:
name: {{ include "hami-vgpu.scheduler" . }}-device
{{- if .Values.devicePlugin.nvidianodeSelector }}
nodeSelector: {{ toYaml .Values.devicePlugin.nvidianodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.devicePlugin.tolerations }}
tolerations: {{ toYaml .Values.devicePlugin.tolerations | nindent 8 }}
{{- end }}


@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}-monitor
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- create
- watch
- list
- update
- patch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- update
- list
- patch


@@ -0,0 +1,16 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}
labels:
app.kubernetes.io/component: "hami-device-plugin"
{{- include "hami-vgpu.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "hami-vgpu.device-plugin" . }}-monitor
subjects:
- kind: ServiceAccount
name: {{ include "hami-vgpu.device-plugin" . }}
namespace: {{ .Release.Namespace | quote }}


@@ -0,0 +1,26 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}-monitor
labels:
app.kubernetes.io/component: hami-device-plugin
{{- include "hami-vgpu.labels" . | nindent 4 }}
{{- if .Values.devicePlugin.service.labels }}
{{ toYaml .Values.devicePlugin.service.labels | indent 4 }}
{{- end }}
{{- if .Values.devicePlugin.service.annotations }}
annotations: {{ toYaml .Values.devicePlugin.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.devicePlugin.service.type | default "NodePort" }} # Default type is NodePort
ports:
- name: monitorport
port: {{ .Values.devicePlugin.service.httpPort | default 31992 }} # Default HTTP port is 31992
targetPort: 9394
{{- if eq (.Values.devicePlugin.service.type | default "NodePort") "NodePort" }} # If type is NodePort, set nodePort
nodePort: {{ .Values.devicePlugin.service.httpPort | default 31992 }}
{{- end }}
protocol: TCP
selector:
app.kubernetes.io/component: hami-device-plugin
{{- include "hami-vgpu.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "hami-vgpu.device-plugin" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
app.kubernetes.io/component: "hami-device-plugin"
{{- include "hami-vgpu.labels" . | nindent 4 }}


@@ -0,0 +1,100 @@
{{- if .Values.scheduler.kubeScheduler.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "hami-vgpu.scheduler" . }}
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.labels" . | nindent 4 }}
data:
config.json: |
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "https://127.0.0.1:443",
"filterVerb": "filter",
"bindVerb": "bind",
"enableHttps": true,
"weight": 1,
"nodeCacheCapable": true,
"httpTimeout": 30000000000,
"tlsConfig": {
"insecure": true
},
"managedResources": [
{{- if .Values.devices.ascend.enabled }}
{{- range .Values.devices.ascend.customresources }}
{
"name": "{{ . }}",
"ignoredByScheduler": true
},
{{- end }}
{{- end }}
{{- if .Values.devices.mthreads.enabled }}
{{- range .Values.devices.mthreads.customresources }}
{
"name": "{{ . }}",
"ignoredByScheduler": true
},
{{- end }}
{{- end }}
{
"name": "{{ .Values.resourceName }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.resourceMem }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.resourceCores }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.resourceMemPercentage }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.resourcePriority }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.mluResourceName }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.dcuResourceName }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.dcuResourceMem }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.dcuResourceCores }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.iluvatarResourceName }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.metaxResourceName }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.metaxResourceCore }}",
"ignoredByScheduler": true
},
{
"name": "{{ .Values.metaxResourceMem }}",
"ignoredByScheduler": true
}
],
"ignoreable": false
}
]
}
{{- end }}


@@ -0,0 +1,70 @@
{{- if .Values.scheduler.kubeScheduler.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "hami-vgpu.scheduler" . }}-newversion
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.labels" . | nindent 4 }}
data:
config.yaml: |
{{- if gt (regexReplaceAll "[^0-9]" .Capabilities.KubeVersion.Minor "" | int) 25 }}
apiVersion: kubescheduler.config.k8s.io/v1
{{- else }}
apiVersion: kubescheduler.config.k8s.io/v1beta2
{{- end }}
kind: KubeSchedulerConfiguration
leaderElection:
leaderElect: false
profiles:
- schedulerName: {{ .Values.schedulerName }}
extenders:
- urlPrefix: "https://127.0.0.1:443"
filterVerb: filter
bindVerb: bind
nodeCacheCapable: true
weight: 1
httpTimeout: 30s
enableHTTPS: true
tlsConfig:
insecure: true
managedResources:
- name: {{ .Values.resourceName }}
ignoredByScheduler: true
- name: {{ .Values.resourceMem }}
ignoredByScheduler: true
- name: {{ .Values.resourceCores }}
ignoredByScheduler: true
- name: {{ .Values.resourceMemPercentage }}
ignoredByScheduler: true
- name: {{ .Values.resourcePriority }}
ignoredByScheduler: true
- name: {{ .Values.mluResourceName }}
ignoredByScheduler: true
- name: {{ .Values.dcuResourceName }}
ignoredByScheduler: true
- name: {{ .Values.dcuResourceMem }}
ignoredByScheduler: true
- name: {{ .Values.dcuResourceCores }}
ignoredByScheduler: true
- name: {{ .Values.iluvatarResourceName }}
ignoredByScheduler: true
- name: {{ .Values.metaxResourceName }}
ignoredByScheduler: true
- name: {{ .Values.metaxResourceCore }}
ignoredByScheduler: true
- name: {{ .Values.metaxResourceMem }}
ignoredByScheduler: true
{{- if .Values.devices.ascend.enabled }}
{{- range .Values.devices.ascend.customresources }}
- name: {{ . }}
ignoredByScheduler: true
{{- end }}
{{- end }}
{{- if .Values.devices.mthreads.enabled }}
{{- range .Values.devices.mthreads.customresources }}
- name: {{ . }}
ignoredByScheduler: true
{{- end }}
{{- end }}
{{- end }}

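The KubeSchedulerConfiguration above registers the HAMi extender under the profile named by `.Values.schedulerName` (`hami-scheduler` by default). A workload opts into this scheduler by setting `spec.schedulerName`; a minimal sketch (pod name and image are placeholders):

```yaml
# Hypothetical pod scheduled by the HAMi scheduler profile instead of the
# default kube-scheduler. schedulerName must match .Values.schedulerName.
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-demo
spec:
  schedulerName: hami-scheduler
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      command: ["sleep", "infinity"]
```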

@@ -0,0 +1,156 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "hami-vgpu.scheduler" . }}
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.labels" . | nindent 4 }}
{{- with .Values.global.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if .Values.global.annotations }}
annotations: {{ toYaml .Values.global.annotations | nindent 4}}
{{- end }}
spec:
{{- if .Values.scheduler.leaderElect }}
replicas: {{ .Values.scheduler.replicas }}
{{- else }}
replicas: 1
{{- end }}
selector:
matchLabels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.selectorLabels" . | nindent 8 }}
hami.io/webhook: ignore
{{- if .Values.scheduler.podAnnotations }}
annotations: {{ toYaml .Values.scheduler.podAnnotations | nindent 8 }}
{{- end }}
spec:
{{- include "hami-vgpu.imagePullSecrets" . | nindent 6}}
serviceAccountName: {{ include "hami-vgpu.scheduler" . }}
priorityClassName: system-node-critical
containers:
{{- if .Values.scheduler.kubeScheduler.enabled }}
- name: kube-scheduler
image: "{{ .Values.scheduler.kubeScheduler.image }}:{{ include "resolvedKubeSchedulerTag" . }}"
imagePullPolicy: {{ .Values.scheduler.kubeScheduler.imagePullPolicy | quote }}
command:
- kube-scheduler
{{- if ge (regexReplaceAll "[^0-9]" .Capabilities.KubeVersion.Minor "" | int) 22 }}
{{- range .Values.scheduler.kubeScheduler.extraNewArgs }}
- {{ . }}
{{- end }}
{{- else }}
- --scheduler-name={{ .Values.schedulerName }}
{{- range .Values.scheduler.kubeScheduler.extraArgs }}
- {{ . }}
{{- end }}
{{- end }}
- --leader-elect={{ .Values.scheduler.leaderElect }}
- --leader-elect-resource-name={{ .Values.schedulerName }}
- --leader-elect-resource-namespace={{ .Release.Namespace }}
resources:
{{- toYaml .Values.scheduler.kubeScheduler.resources | nindent 12 }}
volumeMounts:
- name: scheduler-config
mountPath: /config
{{- end }}
{{- if .Values.scheduler.livenessProbe }}
livenessProbe:
failureThreshold: 8
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
{{- end }}
- name: vgpu-scheduler-extender
image: {{ .Values.scheduler.extender.image }}:{{ .Values.version }}
imagePullPolicy: {{ .Values.scheduler.extender.imagePullPolicy | quote }}
env:
{{- if .Values.global.managedNodeSelectorEnable }}
{{- range $key, $value := .Values.global.managedNodeSelector }}
- name: NODE_SELECTOR_{{ $key | upper | replace "-" "_" }}
value: "{{ $value }}"
{{- end }}
{{- end }}
command:
- scheduler
- --http_bind=0.0.0.0:443
- --cert_file=/tls/tls.crt
- --key_file=/tls/tls.key
- --scheduler-name={{ .Values.schedulerName }}
- --metrics-bind-address={{ .Values.scheduler.metricsBindAddress }}
- --node-scheduler-policy={{ .Values.scheduler.defaultSchedulerPolicy.nodeSchedulerPolicy }}
- --gpu-scheduler-policy={{ .Values.scheduler.defaultSchedulerPolicy.gpuSchedulerPolicy }}
- --device-config-file=/device-config.yaml
{{- if .Values.devices.ascend.enabled }}
- --enable-ascend=true
{{- end }}
{{- if .Values.scheduler.nodeLabelSelector }}
- --node-label-selector={{- $first := true -}}
{{- range $key, $value := .Values.scheduler.nodeLabelSelector -}}
{{- if not $first }},{{ end -}}
{{- $key }}={{ $value -}}
{{- $first = false -}}
{{- end -}}
{{- end }}
{{- range .Values.scheduler.extender.extraArgs }}
- {{ . }}
{{- end }}
ports:
- name: http
containerPort: 443
protocol: TCP
resources:
{{- toYaml .Values.scheduler.extender.resources | nindent 12 }}
volumeMounts:
- name: tls-config
mountPath: /tls
- name: device-config
mountPath: /device-config.yaml
subPath: device-config.yaml
{{- if .Values.scheduler.livenessProbe }}
livenessProbe:
httpGet:
path: /healthz
port: 443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 3
timeoutSeconds: 5
{{- end }}
volumes:
- name: tls-config
secret:
secretName: {{ template "hami-vgpu.scheduler.tls" . }}
{{- if .Values.scheduler.kubeScheduler.enabled }}
- name: scheduler-config
configMap:
{{- if ge (regexReplaceAll "[^0-9]" .Capabilities.KubeVersion.Minor "" | int) 22 }}
name: {{ template "hami-vgpu.scheduler" . }}-newversion
{{- else }}
name: {{ template "hami-vgpu.scheduler" . }}
{{- end }}
{{- end }}
- name: device-config
configMap:
name: {{ include "hami-vgpu.scheduler" . }}-device
{{- if .Values.scheduler.nodeSelector }}
nodeSelector: {{ toYaml .Values.scheduler.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.scheduler.tolerations }}
tolerations: {{ toYaml .Values.scheduler.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.scheduler.nodeName }}
nodeName: {{ .Values.scheduler.nodeName }}
{{- end }}


@@ -0,0 +1,203 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "hami-vgpu.scheduler" . }}-device
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.labels" . | nindent 4 }}
data:
device-config.yaml: |-
{{- if .Files.Glob "files/device-config.yaml" }}
{{- .Files.Get "files/device-config.yaml" | nindent 4}}
{{- else }}
nvidia:
resourceCountName: {{ .Values.resourceName }}
resourceMemoryName: {{ .Values.resourceMem }}
resourceMemoryPercentageName: {{ .Values.resourceMemPercentage }}
resourceCoreName: {{ .Values.resourceCores }}
resourcePriorityName: {{ .Values.resourcePriority }}
overwriteEnv: false
defaultMemory: 16000
defaultCores: 0
defaultGPUNum: 1
deviceSplitCount: {{ .Values.devicePlugin.deviceSplitCount }}
deviceMemoryScaling: {{ .Values.devicePlugin.deviceMemoryScaling }}
deviceCoreScaling: {{ .Values.devicePlugin.deviceCoreScaling }}
gpuCorePolicy: {{ .Values.devices.nvidia.gpuCorePolicy }}
knownMigGeometries:
- models: [ "A30" ]
allowedGeometries:
-
- name: 1g.6gb
memory: 6144
count: 4
-
- name: 2g.12gb
memory: 12288
count: 2
-
- name: 4g.24gb
memory: 24576
count: 1
- models: [ "A100-SXM4-40GB", "A100-40GB-PCIe", "A100-PCIE-40GB" ]
allowedGeometries:
-
- name: 1g.5gb
memory: 5120
count: 7
-
- name: 2g.10gb
memory: 10240
count: 3
- name: 1g.5gb
memory: 5120
count: 1
-
- name: 3g.20gb
memory: 20480
count: 2
-
- name: 7g.40gb
memory: 40960
count: 1
- models: [ "A100-SXM4-80GB", "A100-80GB-PCIe", "A100-PCIE-80GB"]
allowedGeometries:
-
- name: 1g.10gb
memory: 10240
count: 7
-
- name: 2g.20gb
memory: 20480
count: 3
- name: 1g.10gb
memory: 10240
count: 1
-
- name: 3g.40gb
memory: 40960
count: 2
-
- name: 7g.79gb
memory: 80896
count: 1
cambricon:
resourceCountName: {{ .Values.mluResourceName }}
resourceMemoryName: {{ .Values.mluResourceMem }}
resourceCoreName: {{ .Values.mluResourceCores }}
hygon:
resourceCountName: {{ .Values.dcuResourceName }}
resourceMemoryName: {{ .Values.dcuResourceMem }}
resourceCoreName: {{ .Values.dcuResourceCores }}
metax:
resourceCountName: "metax-tech.com/gpu"
resourceVCountName: {{ .Values.metaxResourceName }}
resourceVMemoryName: {{ .Values.metaxResourceMem }}
resourceVCoreName: {{ .Values.metaxResourceCore }}
mthreads:
resourceCountName: "mthreads.com/vgpu"
resourceMemoryName: "mthreads.com/sgpu-memory"
resourceCoreName: "mthreads.com/sgpu-core"
iluvatar:
resourceCountName: {{ .Values.iluvatarResourceName }}
resourceMemoryName: {{ .Values.iluvatarResourceMem }}
resourceCoreName: {{ .Values.iluvatarResourceCore }}
vnpus:
- chipName: 910B
commonWord: Ascend910A
resourceName: huawei.com/Ascend910A
resourceMemoryName: huawei.com/Ascend910A-memory
memoryAllocatable: 32768
memoryCapacity: 32768
aiCore: 30
templates:
- name: vir02
memory: 2184
aiCore: 2
- name: vir04
memory: 4369
aiCore: 4
- name: vir08
memory: 8738
aiCore: 8
- name: vir16
memory: 17476
aiCore: 16
- chipName: 910B2
commonWord: Ascend910B2
resourceName: huawei.com/Ascend910B2
resourceMemoryName: huawei.com/Ascend910B2-memory
memoryAllocatable: 65536
memoryCapacity: 65536
aiCore: 24
aiCPU: 6
templates:
- name: vir03_1c_8g
memory: 8192
aiCore: 3
aiCPU: 1
- name: vir06_1c_16g
memory: 16384
aiCore: 6
aiCPU: 1
- name: vir12_3c_32g
memory: 32768
aiCore: 12
aiCPU: 3
- chipName: 910B3
commonWord: Ascend910B
resourceName: huawei.com/Ascend910B
resourceMemoryName: huawei.com/Ascend910B-memory
memoryAllocatable: 65536
memoryCapacity: 65536
aiCore: 20
aiCPU: 7
templates:
- name: vir05_1c_16g
memory: 16384
aiCore: 5
aiCPU: 1
- name: vir10_3c_32g
memory: 32768
aiCore: 10
aiCPU: 3
- chipName: 910B4
commonWord: Ascend910B4
resourceName: huawei.com/Ascend910B4
resourceMemoryName: huawei.com/Ascend910B4-memory
memoryAllocatable: 32768
memoryCapacity: 32768
aiCore: 20
aiCPU: 7
templates:
- name: vir05_1c_8g
memory: 8192
aiCore: 5
aiCPU: 1
- name: vir10_3c_16g
memory: 16384
aiCore: 10
aiCPU: 3
- chipName: 310P3
commonWord: Ascend310P
resourceName: huawei.com/Ascend310P
resourceMemoryName: huawei.com/Ascend310P-memory
memoryAllocatable: 21527
memoryCapacity: 24576
aiCore: 8
aiCPU: 7
templates:
- name: vir01
memory: 3072
aiCore: 1
aiCPU: 1
- name: vir02
memory: 6144
aiCore: 2
aiCPU: 2
- name: vir04
memory: 12288
aiCore: 4
aiCPU: 4
{{- end }}


@@ -0,0 +1,26 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
#- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- update
{{- if .Values.podSecurityPolicy.enabled }}
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ include "hami-vgpu.fullname" . }}-admission
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "hami-vgpu.fullname" . }}-admission
subjects:
- kind: ServiceAccount
name: {{ include "hami-vgpu.fullname" . }}-admission
namespace: {{ .Release.Namespace | quote }}


@@ -0,0 +1,60 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission-create
annotations:
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
spec:
{{- if .Capabilities.APIVersions.Has "batch/v1alpha1" }}
# Alpha feature since k8s 1.12
ttlSecondsAfterFinished: 0
{{- end }}
template:
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission-create
{{- if .Values.scheduler.patch.podAnnotations }}
annotations: {{ toYaml .Values.scheduler.patch.podAnnotations | nindent 8 }}
{{- end }}
labels:
{{- include "hami-vgpu.labels" . | nindent 8 }}
app.kubernetes.io/component: admission-webhook
hami.io/webhook: ignore
spec:
{{- include "hami-vgpu.imagePullSecrets" . | nindent 6}}
{{- if .Values.scheduler.patch.priorityClassName }}
priorityClassName: {{ .Values.scheduler.patch.priorityClassName }}
{{- end }}
containers:
- name: create
{{- if ge (regexReplaceAll "[^0-9]" .Capabilities.KubeVersion.Minor "" | int) 22 }}
image: {{ .Values.scheduler.patch.imageNew }}
{{- else }}
image: {{ .Values.scheduler.patch.image }}
{{- end }}
imagePullPolicy: {{ .Values.scheduler.patch.imagePullPolicy }}
args:
- create
- --cert-name=tls.crt
- --key-name=tls.key
{{- if .Values.scheduler.admissionWebhook.customURL.enabled }}
- --host={{ printf "%s.%s.svc,127.0.0.1,%s" (include "hami-vgpu.scheduler" .) .Release.Namespace .Values.scheduler.admissionWebhook.customURL.host}}
{{- else }}
- --host={{ printf "%s.%s.svc,127.0.0.1" (include "hami-vgpu.scheduler" .) .Release.Namespace }}
{{- end }}
- --namespace={{ .Release.Namespace }}
- --secret-name={{ include "hami-vgpu.scheduler.tls" . }}
restartPolicy: OnFailure
serviceAccountName: {{ include "hami-vgpu.fullname" . }}-admission
{{- if .Values.scheduler.patch.nodeSelector }}
nodeSelector: {{ toYaml .Values.scheduler.patch.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.scheduler.patch.tolerations }}
tolerations: {{ toYaml .Values.scheduler.patch.tolerations | nindent 8 }}
{{- end }}
securityContext:
runAsNonRoot: true
runAsUser: {{ .Values.scheduler.patch.runAsUser }}


@@ -0,0 +1,55 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission-patch
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
spec:
{{- if .Capabilities.APIVersions.Has "batch/v1alpha1" }}
# Alpha feature since k8s 1.12
ttlSecondsAfterFinished: 0
{{- end }}
template:
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission-patch
{{- if .Values.scheduler.patch.podAnnotations }}
annotations: {{ toYaml .Values.scheduler.patch.podAnnotations | nindent 8 }}
{{- end }}
labels:
{{- include "hami-vgpu.labels" . | nindent 8 }}
app.kubernetes.io/component: admission-webhook
hami.io/webhook: ignore
spec:
{{- include "hami-vgpu.imagePullSecrets" . | nindent 6}}
{{- if .Values.scheduler.patch.priorityClassName }}
priorityClassName: {{ .Values.scheduler.patch.priorityClassName }}
{{- end }}
containers:
- name: patch
{{- if ge (regexReplaceAll "[^0-9]" .Capabilities.KubeVersion.Minor "" | int) 22 }}
image: {{ .Values.scheduler.patch.imageNew }}
{{- else }}
image: {{ .Values.scheduler.patch.image }}
{{- end }}
imagePullPolicy: {{ .Values.scheduler.patch.imagePullPolicy }}
args:
- patch
- --webhook-name={{ include "hami-vgpu.scheduler.webhook" . }}
- --namespace={{ .Release.Namespace }}
- --patch-validating=false
- --secret-name={{ include "hami-vgpu.scheduler.tls" . }}
restartPolicy: OnFailure
serviceAccountName: {{ include "hami-vgpu.fullname" . }}-admission
{{- if .Values.scheduler.patch.nodeSelector }}
nodeSelector: {{ toYaml .Values.scheduler.patch.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.scheduler.patch.tolerations }}
tolerations: {{ toYaml .Values.scheduler.patch.tolerations | nindent 8 }}
{{- end }}
securityContext:
runAsNonRoot: true
runAsUser: {{ .Values.scheduler.patch.runAsUser }}


@@ -0,0 +1,36 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
spec:
allowPrivilegeEscalation: false
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
requiredDropCapabilities:
- ALL
runAsUser:
rule: MustRunAsNonRoot
seLinux:
rule: RunAsAny
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create


@@ -0,0 +1,18 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "hami-vgpu.fullname" . }}-admission
subjects:
- kind: ServiceAccount
name: {{ include "hami-vgpu.fullname" . }}-admission
namespace: {{ .Release.Namespace | quote }}


@@ -0,0 +1,10 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "hami-vgpu.fullname" . }}-admission
annotations:
"helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "hami-vgpu.labels" . | nindent 4 }}
app.kubernetes.io/component: admission-webhook


@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "hami-vgpu.scheduler" . }}
labels:
app.kubernetes.io/component: "hami-scheduler"
{{- include "hami-vgpu.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: {{ include "hami-vgpu.scheduler" . }}
namespace: {{ .Release.Namespace | quote }}


@@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "hami-vgpu.scheduler" . }}
labels:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.labels" . | nindent 4 }}
{{- if .Values.scheduler.service.labels }}
{{ toYaml .Values.scheduler.service.labels | indent 4 }}
{{- end }}
{{- if .Values.scheduler.service.annotations }}
annotations: {{ toYaml .Values.scheduler.service.annotations | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.scheduler.service.type | default "NodePort" }} # Default type is NodePort
ports:
- name: http
port: {{ .Values.scheduler.service.httpPort | default 443 }} # Default HTTP port is 443
targetPort: {{ .Values.scheduler.service.httpTargetPort | default 443 }}
{{- if eq (.Values.scheduler.service.type | default "NodePort") "NodePort" }} # If type is NodePort, set nodePort
nodePort: {{ .Values.scheduler.service.schedulerPort | default 31998 }}
{{- end }}
protocol: TCP
- name: monitor
port: {{ .Values.scheduler.service.monitorPort | default 31993 }} # Default monitoring port is 31993
targetPort: {{ .Values.scheduler.service.monitorTargetPort | default 31993 }}
{{- if eq (.Values.scheduler.service.type | default "NodePort") "NodePort" }} # If type is NodePort, set nodePort
nodePort: {{ .Values.scheduler.service.monitorPort | default 31993 }}
{{- end }}
protocol: TCP
selector:
app.kubernetes.io/component: hami-scheduler
{{- include "hami-vgpu.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "hami-vgpu.scheduler" . }}
namespace: {{ .Release.Namespace | quote }}
labels:
app.kubernetes.io/component: "hami-scheduler"
{{- include "hami-vgpu.labels" . | nindent 4 }}


@@ -0,0 +1,51 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: {{ include "hami-vgpu.scheduler.webhook" . }}
webhooks:
- admissionReviewVersions:
- v1beta1
clientConfig:
{{- if .Values.scheduler.admissionWebhook.customURL.enabled }}
url: https://{{ .Values.scheduler.admissionWebhook.customURL.host}}:{{.Values.scheduler.admissionWebhook.customURL.port}}{{.Values.scheduler.admissionWebhook.customURL.path}}
{{- else }}
service:
name: {{ include "hami-vgpu.scheduler" . }}
namespace: {{ .Release.Namespace }}
path: /webhook
port: {{ .Values.scheduler.service.httpPort }}
{{- end }}
failurePolicy: {{ .Values.scheduler.admissionWebhook.failurePolicy }}
matchPolicy: Equivalent
name: vgpu.hami.io
namespaceSelector:
matchExpressions:
- key: hami.io/webhook
operator: NotIn
values:
- ignore
{{- if .Values.scheduler.admissionWebhook.whitelistNamespaces }}
- key: kubernetes.io/metadata.name
operator: NotIn
values:
{{- toYaml .Values.scheduler.admissionWebhook.whitelistNamespaces | nindent 10 }}
{{- end }}
objectSelector:
matchExpressions:
- key: hami.io/webhook
operator: NotIn
values:
- ignore
reinvocationPolicy: {{ .Values.scheduler.admissionWebhook.reinvocationPolicy }}
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
scope: '*'
sideEffects: None
timeoutSeconds: 10

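Both the `namespaceSelector` and the `objectSelector` above exclude anything carrying `hami.io/webhook: ignore`, which is how the chart's own pods avoid self-mutation. The same label can be applied to exclude a whole namespace; a minimal sketch (the namespace name is a placeholder):

```yaml
# Pods created in a namespace labeled like this are never sent to the
# HAMi mutating webhook, per the namespaceSelector in the configuration above.
apiVersion: v1
kind: Namespace
metadata:
  name: no-vgpu
  labels:
    hami.io/webhook: ignore
```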

@@ -0,0 +1,219 @@
# Default values for hami-vgpu.
nameOverride: ""
fullnameOverride: ""
imagePullSecrets: [ ]
version: "v2.5.0"
#Nvidia GPU Parameters
resourceName: "nvidia.com/gpu"
resourceMem: "nvidia.com/gpumem"
resourceMemPercentage: "nvidia.com/gpumem-percentage"
resourceCores: "nvidia.com/gpucores"
resourcePriority: "nvidia.com/priority"
#MLU Parameters
mluResourceName: "cambricon.com/vmlu"
mluResourceMem: "cambricon.com/mlu.smlu.vmemory"
mluResourceCores: "cambricon.com/mlu.smlu.vcore"
#Hygon DCU Parameters
dcuResourceName: "hygon.com/dcunum"
dcuResourceMem: "hygon.com/dcumem"
dcuResourceCores: "hygon.com/dcucores"
#Iluvatar GPU Parameters
iluvatarResourceName: "iluvatar.ai/vgpu"
iluvatarResourceMem: "iluvatar.ai/vcuda-memory"
iluvatarResourceCore: "iluvatar.ai/vcuda-core"
#Metax SGPU Parameters
metaxResourceName: "metax-tech.com/sgpu"
metaxResourceCore: "metax-tech.com/vcore"
metaxResourceMem: "metax-tech.com/vmemory"
schedulerName: "hami-scheduler"
podSecurityPolicy:
enabled: false
global:
gpuHookPath: /usr/local
labels: {}
annotations: {}
managedNodeSelectorEnable: false
managedNodeSelector:
usage: "gpu"
scheduler:
# @param nodeName pins the hami-scheduler pod to a specific node.
# If hami-scheduler is installed as the default scheduler, the default
# kube-scheduler pod must first be removed from the cluster; specifying a
# node name lets this pod bypass the normal scheduling workflow.
nodeName: ""
#nodeLabelSelector:
# "gpu": "on"
overwriteEnv: "false"
defaultSchedulerPolicy:
nodeSchedulerPolicy: binpack
gpuSchedulerPolicy: spread
metricsBindAddress: ":9395"
livenessProbe: false
leaderElect: true
# replicas takes effect only when leaderElect is true; otherwise it is fixed at 1.
replicas: 1
kubeScheduler:
# @param enabled indicates whether to run the kube-scheduler container in the scheduler pod; true by default.
enabled: true
image: registry.k8s.io/kube-scheduler
imageTag: ""
imagePullPolicy: IfNotPresent
resources: {}
# If you do want to specify resources, uncomment the following lines, adjust them as necessary,
# and remove the curly braces after 'resources:'.
# limits:
# cpu: 1000m
# memory: 1000Mi
# requests:
# cpu: 100m
# memory: 100Mi
extraNewArgs:
- --config=/config/config.yaml
- -v=4
extraArgs:
- --policy-config-file=/config/config.json
- -v=4
extender:
image: "beclab/hami"
imagePullPolicy: IfNotPresent
resources: {}
# If you do want to specify resources, uncomment the following lines, adjust them as necessary,
# and remove the curly braces after 'resources:'.
# limits:
# cpu: 1000m
# memory: 1000Mi
# requests:
# cpu: 100m
# memory: 100Mi
extraArgs:
- --debug
- -v=4
podAnnotations: {}
tolerations: []
#serviceAccountName: "hami-vgpu-scheduler-sa"
admissionWebhook:
customURL:
enabled: false
# must be an endpoint served over HTTPS;
# certificates for this host should be generated accordingly.
host: 127.0.0.1 # hostname or IP; can be your node's IP if you want to use https://<nodeIP>:<schedulerPort>/<path>
port: 31998
path: /webhook
whitelistNamespaces:
# Specify the namespaces that the webhook will not be applied to.
# - default
# - kube-system
# - istio-system
reinvocationPolicy: Never
failurePolicy: Ignore
patch:
image: jettech/kube-webhook-certgen:v1.5.2
imageNew: liangjw/kube-webhook-certgen:v1.1.1
imagePullPolicy: IfNotPresent
priorityClassName: ""
podAnnotations: {}
nodeSelector: {}
tolerations: []
runAsUser: 2000
service:
type: NodePort # Default type is NodePort, can be changed to ClusterIP
httpPort: 443 # HTTP port
schedulerPort: 31998 # NodePort for HTTP
monitorPort: 31993 # Monitoring port
labels: {}
annotations: {}
devicePlugin:
image: "beclab/hami"
monitorimage: "beclab/hami"
monitorctrPath: /usr/local/vgpu/containers
imagePullPolicy: IfNotPresent
deviceSplitCount: 100
deviceMemoryScaling: 100
deviceCoreScaling: 100
runtimeClassName: ""
migStrategy: "none"
disablecorelimit: "false"
passDeviceSpecsEnabled: false
extraArgs:
- -v=4
service:
type: NodePort # Default type is NodePort, can be changed to ClusterIP
httpPort: 31992
labels: {}
annotations: {}
pluginPath: /var/lib/kubelet/device-plugins
libPath: /usr/local/vgpu
podAnnotations: {}
nvidianodeSelector:
gpu.bytetrade.io/cuda-supported: 'true'
tolerations: []
# The updateStrategy for the DevicePlugin DaemonSet.
# To update the DaemonSet manually, set type to "OnDelete".
# We recommend the OnDelete strategy: restarting a DevicePlugin pod also
# restarts the business pods on that node, which is disruptive.
# Otherwise, use the RollingUpdate strategy to roll DevicePlugin pods automatically.
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
resources: {}
# If you do want to specify resources, uncomment the following lines, adjust them as necessary,
# and remove the curly braces after 'resources:'.
# limits:
# cpu: 1000m
# memory: 1000Mi
# requests:
# cpu: 100m
# memory: 100Mi
vgpuMonitor:
resources: {}
# If you do want to specify resources, uncomment the following lines, adjust them as necessary,
# and remove the curly braces after 'resources:'.
# limits:
# cpu: 1000m
# memory: 1000Mi
# requests:
# cpu: 100m
# memory: 100Mi
devices:
mthreads:
enabled: false
customresources:
- mthreads.com/vgpu
nvidia:
gpuCorePolicy: default
ascend:
enabled: false
image: ""
imagePullPolicy: IfNotPresent
extraArgs: []
nodeSelector:
ascend: "on"
tolerations: []
customresources:
- huawei.com/Ascend910A
- huawei.com/Ascend910A-memory
- huawei.com/Ascend910B2
- huawei.com/Ascend910B2-memory
- huawei.com/Ascend910B
- huawei.com/Ascend910B-memory
- huawei.com/Ascend910B4
- huawei.com/Ascend910B4-memory
- huawei.com/Ascend310P
- huawei.com/Ascend310P-memory
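As a usage sketch for the device-plugin settings above (deviceSplitCount, deviceMemoryScaling, deviceCoreScaling), a workload would request a slice of a shared GPU through extended resources. The resource names below follow upstream HAMi conventions and are assumptions here, not values defined by this chart:

```yaml
# Hypothetical pod requesting one vGPU slice from the scheduler configured above.
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-demo
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1        # one slice (deviceSplitCount: 100 allows up to 100 per card)
          nvidia.com/gpumem: 2000  # device memory (MiB) reserved for this slice
          nvidia.com/gpucores: 30  # percentage of SM cores for this slice
```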

View File

@@ -1,4 +0,0 @@
gpu:
server: 'host:30123'

View File

@@ -54,7 +54,7 @@ spec:
properties:
appid:
description: the unique id of the application for sys application
appid equal name otherwise appid equal md5(name)[:8]
type: string
deployment:
description: the deployment of the application
@@ -141,6 +141,26 @@ spec:
type: string
description: the extend settings of the application
type: object
tailscaleAcls:
items:
properties:
action:
type: string
dst:
items:
type: string
type: array
proto:
type: string
src:
items:
type: string
type: array
required:
- dst
- proto
type: object
type: array
required:
- appid
- isSysApp

View File

@@ -146,9 +146,10 @@ spec:
spec:
serviceAccountName: os-internal
serviceAccount: os-internal
priorityClassName: "system-cluster-critical"
containers:
- name: app-service
image: beclab/app-service:0.2.58
image: beclab/app-service:0.3.7
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
@@ -172,7 +173,7 @@ spec:
- name: UPLOAD_CONTAINER_IMAGE
value: "beclab/upload:v1.0.3"
- name: JOB_IMAGE
value: "beclab/upgrade-job:0.1.5"
value: "beclab/upgrade-job:0.1.7"
- name: SHARED_LIB_PATH
value: {{ .Values.sharedlib }}
- name: CLUSTER_CPU_THRESHOLD
@@ -201,6 +202,8 @@ spec:
name: certs
- mountPath: /etc/containerd/config.toml
name: configtoml
- mountPath: /Cache
name: app-cache
initContainers:
- name: generate-certs
image: beclab/openssl:v3
@@ -224,6 +227,10 @@ spec:
- name: certs
mountPath: /etc/certs
volumes:
- name: app-cache
hostPath:
path: {{ .Values.rootPath }}/userdata/Cache
type: DirectoryOrCreate
- name: configtoml
hostPath:
path: /etc/containerd/config.toml
@@ -360,7 +367,7 @@ spec:
hostNetwork: true
containers:
- name: image-service
image: beclab/image-service:0.2.51
image: beclab/image-service:0.2.66
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0

View File

@@ -199,6 +199,12 @@ spec:
metadata:
labels:
tier: bfl
annotations:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "api"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/bfl-api"
# instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
# instrumentation.opentelemetry.io/inject-nginx-container-names: "ingress"
spec:
{{ if .Values.bfl.admin_user }}
affinity:
@@ -215,6 +221,7 @@ spec:
weight: 10
{{ end }}
serviceAccountName: bytetrade-controller
priorityClassName: "system-cluster-critical"
initContainers:
- name: init-userspace
image: busybox:1.28
@@ -242,7 +249,7 @@ spec:
containers:
- name: api
image: beclab/bfl:v0.3.59
image: beclab/bfl:v0.4.1
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1000
@@ -295,7 +302,7 @@ spec:
value: {{ .Values.bfl.terminus_dns_service_api }}
- name: ingress
image: beclab/bfl-ingress:v0.2.18
image: beclab/bfl-ingress:v0.3.1
imagePullPolicy: IfNotPresent
volumeMounts:
- name: ngxlog

View File

@@ -44,9 +44,10 @@ spec:
spec:
serviceAccountName: bytetrade-sys-ops
serviceAccount: bytetrade-sys-ops
priorityClassName: "system-cluster-critical"
containers:
- name: system-server
image: beclab/system-server:0.1.19
image: beclab/system-server:0.1.21
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80

View File

@@ -0,0 +1,133 @@
{{- $namespace := printf "%s" "os-system" -}}
{{ $lldap_rootpath := "/olares/userdata/dbdata" }}
{{- $lldap_secret := (lookup "v1" "Secret" $namespace "lldap-credentials") -}}
{{- $lldap_jwt_secret := "" -}}
{{- $lldap_ldap_user_pass := "" -}}
{{- $lldap_key_seed := "" -}}
{{ if $lldap_secret -}}
{{ $lldap_jwt_secret = (index $lldap_secret "data" "lldap-jwt-secret") }}
{{ $lldap_ldap_user_pass = (index $lldap_secret "data" "lldap-ldap-user-pass") }}
{{ $lldap_key_seed = (index $lldap_secret "data" "lldap-key-seed") }}
{{ else -}}
{{ $lldap_jwt_secret = randAlpha 64 | b64enc }}
{{ $lldap_ldap_user_pass = randAlpha 64 | b64enc }}
{{ $lldap_key_seed = randAlpha 64 | b64enc }}
{{- end -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
lldap: https://github.com/nitnelave/lldap
k8s: https://github.com/Evantage-WS/lldap-kubernetes
labels:
app: lldap
applications.app.bytetrade.io/author: bytetrade.io
name: lldap
namespace: os-system
spec:
replicas: 1
selector:
matchLabels:
app: lldap
strategy:
type: Recreate
template:
metadata:
annotations:
lldap: https://github.com/nitnelave/lldap
k8s: https://github.com/Evantage-WS/lldap-kubernetes
labels:
app: lldap
spec:
containers:
- env:
- name: GID
value: "1001"
- name: LLDAP_JWT_SECRET
valueFrom:
secretKeyRef:
name: lldap-credentials
key: lldap-jwt-secret
- name: LLDAP_LDAP_BASE_DN
valueFrom:
secretKeyRef:
name: lldap-credentials
key: base-dn
- name: LLDAP_LDAP_USER_DN
valueFrom:
secretKeyRef:
name: lldap-credentials
key: lldap-ldap-user-dn
- name: LLDAP_LDAP_USER_PASS
valueFrom:
secretKeyRef:
name: lldap-credentials
key: lldap-ldap-user-pass
- name: LLDAP_KEY_SEED
valueFrom:
secretKeyRef:
name: lldap-credentials
key: lldap-key-seed
- name: LLDAP_DATABASE_URL
value: "sqlite:///data/users.db?mode=rwc"
- name: TZ
value: CET
- name: UID
value: "1001"
- name: LLDAP_KEY_FILE
value: "/data/private_key"
- name: RUST_BACKTRACE
value: "full"
image: beclab/lldap:0.0.1
imagePullPolicy: IfNotPresent
name: lldap
ports:
- containerPort: 3890
- containerPort: 17170
volumeMounts:
- mountPath: /data
name: lldap-data
restartPolicy: Always
volumes:
- name: lldap-data
hostPath:
type: DirectoryOrCreate
path: {{ $lldap_rootpath }}/lldap
---
apiVersion: v1
kind: Service
metadata:
annotations:
lldap: https://github.com/nitnelave/lldap
k8s: https://github.com/Evantage-WS/lldap-kubernetes
labels:
app: lldap-service
name: lldap-service
namespace: os-system
spec:
ports:
- name: "3890"
port: 3890
targetPort: 3890
- name: "17170"
port: 17170
targetPort: 17170
selector:
app: lldap
---
apiVersion: v1
data:
base-dn: ZGM9ZXhhbXBsZSxkYz1jb20=
lldap-jwt-secret: {{ $lldap_jwt_secret }}
lldap-ldap-user-dn: YWRtaW4=
lldap-ldap-user-pass: {{ $lldap_ldap_user_pass }}
lldap-key-seed: {{ $lldap_key_seed }}
kind: Secret
metadata:
name: lldap-credentials
namespace: os-system
type: Opaque
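The template above generates credentials on first install and reuses them via `lookup` on upgrades; only `base-dn` and the user DN are fixed, stored as plain base64. Their effective defaults can be read back directly:

```shell
# Decode the hardcoded Secret values from the manifest above.
echo 'ZGM9ZXhhbXBsZSxkYz1jb20=' | base64 -d   # base-dn
echo
echo 'YWRtaW4=' | base64 -d                    # lldap-ldap-user-dn
echo
```

This prints `dc=example,dc=com` and `admin`, matching the `LLDAP_LDAP_BASE_DN` and `LLDAP_LDAP_USER_DN` environment wiring in the Deployment.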

View File

@@ -99,7 +99,7 @@ spec:
- name: DISABLE_TELEMETRY
value: "false"
- name: operator-api
image: beclab/middleware-operator:0.1.37
image: beclab/middleware-operator:0.2.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080

View File

@@ -247,6 +247,24 @@ spec:
app.kubernetes.io/name: nats
app.kubernetes.io/instance: nats
spec:
initContainers:
- name: generate-config
image: busybox:1.28
command:
- sh
- -c
- |
if [ ! -f /data/config/nats.conf ]; then
cat /etc/nats-config/nats.conf > /data/config/nats.conf
else
echo "nats config file already exists"
fi
volumeMounts:
- mountPath: /etc/nats-config
name: config
readOnly: false
- mountPath: /data
name: nats-data
containers:
- args:
- --config

View File

@@ -43,7 +43,7 @@ spec:
chown -R 1000:1000 /data
containers:
- name: tapr-images-uploader
image: beclab/images-uploader:0.1.2
image: beclab/images-uploader:0.2.0
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false

View File

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -o pipefail
set -xe
curl -Lo wsl.2.3.26.0.amd64.msi https://github.com/microsoft/WSL/releases/download/2.3.26/wsl.2.3.26.0.x64.msi
wsl_2_3_26=$(md5sum wsl.2.3.26.0.amd64.msi|awk '{print $1}')
aws s3 cp wsl.2.3.26.0.amd64.msi s3://terminus-os-install/${wsl_2_3_26} --acl=public-read
curl -Lo wsl.2.3.26.0.arm64.msi https://github.com/microsoft/WSL/releases/download/2.3.26/wsl.2.3.26.0.arm64.msi
wsl_2_3_26_arm64=$(md5sum wsl.2.3.26.0.arm64.msi|awk '{print $1}')
aws s3 cp wsl.2.3.26.0.arm64.msi s3://terminus-os-install/arm64/${wsl_2_3_26_arm64} --acl=public-read
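The script above uploads each MSI under an S3 key equal to the md5 of its contents, so a downloader can verify a mirror by re-hashing. A minimal sketch of that content-addressed naming (`payload.bin` is a hypothetical stand-in for the downloaded installer):

```shell
# Derive the S3 object key the same way as the upload script: md5 of the file.
printf 'hello' > payload.bin
key=$(md5sum payload.bin | awk '{print $1}')
echo "$key"   # -> 5d41402abc4b2a76b9719d911017c592
```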

View File

@@ -24,6 +24,10 @@ for mod in "${PACKAGE_MODULE[@]}";do
bash ${BASE_DIR}/yaml2prop.sh -f $p | while read l;do
if [[ "$l" == *".image = "* ]]; then
echo "$l"
if [[ $(echo "$l" | awk '{print $3}') == "value" ]]; then
echo "ignoring template value"
continue
fi
echo "$l" >> ${TMP_MANIFEST}
fi;
done

View File

@@ -34,14 +34,28 @@ for deps in "components" "pkgs"; do
name=$(echo -n "$filename"|md5sum|awk '{print $1}')
checksum="$name.checksum.txt"
md5sum $name > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name > /dev/null
if [ $? -ne 0 ]; then
set -ex
aws s3 cp $name s3://terminus-os-install/$path$name --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name to s3 completed"
set +ex
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name)
if [ $code -eq 403 ]; then
set -ex
aws s3 cp $name s3://terminus-os-install/$path$name --acl=public-read
aws s3 cp $name s3://terminus-os-install/backup/$path$backup_file --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name to s3 completed"
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image"
exit 1
fi
fi
fi
# upload to tencent cloud cos

View File

@@ -13,18 +13,33 @@ cat $1|while read image; do
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz > /dev/null
if [ $? -ne 0 ]; then
set -e
docker pull $image
docker save $image -o $name.tar
gzip $name.tar
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz)
if [ $code -eq 403 ]; then
set -ex
docker pull $image
docker save $image -o $name.tar
gzip $name.tar
md5sum $name.tar.gz > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
aws s3 cp $name.tar.gz s3://terminus-os-install/$path$name.tar.gz --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name completed"
set +e
echo "start to upload [$name.tar.gz]"
aws s3 cp $name.tar.gz s3://terminus-os-install/$path$name.tar.gz --acl=public-read
aws s3 cp $name.tar.gz s3://terminus-os-install/backup/$path$backup_file --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name completed"
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image"
exit 1
fi
fi
fi
@@ -32,17 +47,31 @@ cat $1|while read image; do
# re-upload checksum.txt
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$checksum > /dev/null
if [ $? -ne 0 ]; then
set -e
docker pull $image
docker save $image -o $name.tar
gzip $name.tar
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$checksum)
if [ $code -eq 403 ]; then
set -ex
docker pull $image
docker save $image -o $name.tar
gzip $name.tar
md5sum $name.tar.gz > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
aws s3 cp $name.tar.gz s3://terminus-os-install/$path$name.tar.gz --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name completed"
set +e
aws s3 cp $name.tar.gz s3://terminus-os-install/$path$name.tar.gz --acl=public-read
aws s3 cp $name.tar.gz s3://terminus-os-install/backup/$path$backup_file --acl=public-read
aws s3 cp $checksum s3://terminus-os-install/$path$checksum --acl=public-read
echo "upload $name completed"
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image"
exit 1
fi
fi
fi
# upload to tencent cloud cos
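The branching introduced in the two upload scripts above follows one pattern: probe CloudFront with `curl -w "%{http_code}"`, treat 403 as "object absent, upload it", 200 as "already mirrored, skip", and anything else as an error. Factored out for clarity (a sketch, not part of the scripts):

```shell
# Decision logic for the check-then-upload pattern used above.
decide() {
  case "$1" in
    403) echo upload ;;  # CloudFront returns 403 for a missing S3 object
    200) echo skip ;;    # object already mirrored
    *)   echo error ;;   # unexpected status: fail the build
  esac
}
decide 403   # -> upload
decide 200   # -> skip
decide 500   # -> error
```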

View File

@@ -87,6 +87,28 @@ data:
# authentication_backend:
# file:
# path: /config/users_database.yml
authentication_backend:
password_reset:
disable: false
refresh_interval: 5m
lldap:
implementation: custom
url: ldap://lldap-service:3890
timeout: 5s
start_tls: false
base_dn: dc=example,dc=com
additional_users_dn: ou=users
users_filter: (&({username_attribute}={input})(objectClass=person))
additional_groups_dn: ou=groups
groups_filter: "(member={dn})"
group_name_attribute: cn
mail_attribute: mail
display_name_attribute: displayName
username_attribute: uid
server: lldap-service
port: 17170
user: cn=admin,dc=example,dc=com
password: adminpassword
access_control:
config_type: terminus
@@ -306,6 +328,7 @@ spec:
spec:
serviceAccountName: os-internal
serviceAccount: os-internal
priorityClassName: "system-cluster-critical"
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
@@ -337,7 +360,7 @@ spec:
containers:
- name: authelia
image: beclab/auth:0.1.41
image: beclab/auth:0.1.43
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9091
@@ -423,6 +446,7 @@ spec:
labels:
app: redis
spec:
priorityClassName: "system-cluster-critical"
containers:
- name: redis
image: redis:6.2.13-alpine3.18

View File

@@ -28,7 +28,7 @@ spec:
name: check-auth
containers:
- name: auth-front
image: beclab/login:v0.1.33
image: beclab/login:v0.1.39
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80

View File

@@ -1,4 +1,42 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $headscale_secret := (lookup "v1" "Secret" $namespace "headscale-secrets") -}}
{{- $pg_password := "" -}}
{{ if $headscale_secret -}}
{{ $pg_password = (index $headscale_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: headscale-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: headscale-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: headscale
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: headscale_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: headscale-secrets
databases:
- name: headscale
---
apiVersion: v1
@@ -36,8 +74,6 @@ spec:
selector:
matchLabels:
app: headscale
strategy:
type: Recreate
template:
metadata:
labels:
@@ -68,7 +104,7 @@ spec:
- |
chown -R 1000:1000 /headscale
- name: init
image: beclab/headscale-init:v0.1.7
image: beclab/headscale-init:v0.1.9
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
@@ -79,9 +115,39 @@ spec:
{{- end }}
- name: NAMESPACE
value: bfl.user-space-{{ .Values.bfl.username }}
- name: PG_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PG_PORT
value: "5432"
- name: PG_USER
value: headscale_{{ .Values.bfl.username }}
- name: PG_PASS
value: "{{ $pg_password | b64dec }}"
- name: PG_DB
value: user_space_{{ .Values.bfl.username }}_headscale
volumeMounts:
- name: config
mountPath: /etc/headscale
- name: wait-for-postgres
image: postgres:16.0-alpine3.18
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB
-c "SELECT 1"; do sleep 1; printf "-"; done; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: headscale_{{ .Values.bfl.username }}
- name: PGPASSWORD
value: "{{ $pg_password | b64dec }}"
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_headscale
imagePullPolicy: IfNotPresent
containers:
- name: headscale
image: headscale/headscale:0.22.3
@@ -109,6 +175,9 @@ spec:
mountPath: /etc/headscale
- name: headscale-data
mountPath: /var/lib/headscale
- name: acl-config
mountPath: /etc/headscale/acl
readOnly: true
ports:
- containerPort: 8080
- args:
@@ -141,6 +210,13 @@ spec:
hostPath:
type: DirectoryOrCreate
path: {{ .Values.userspace.appCache }}/headscale
- name: acl-config
configMap:
defaultMode: 420
items:
- key: acl.json
path: acl.json
name: tailscale-acl
---
apiVersion: apps/v1
@@ -198,7 +274,7 @@ spec:
- name: TS_STATE_DIR
value: "/var/lib/tailscale/"
- name: TS_TAILSCALED_EXTRA_ARGS
value: "--no-logs-no-support --verbose=1"
value: "--no-logs-no-support --verbose=1"
- name: TS_ROUTES
value: $(NODE_IP)/32
- name: TS_EXTRA_ARGS
@@ -283,3 +359,26 @@ spec:
version: v1
status:
state: active
---
apiVersion: v1
data:
acl.json: |
{
"acls":[
{ "action": "accept", "src": ["*"], "proto": "tcp", "dst": ["*:443"] }
],
"autoApprovers": {
"routes": {
"10.0.0.0/8": ["default"],
"172.16.0.0/12": ["default"],
"192.168.0.0/16": ["default"]
},
"exitNode": []
}
}
kind: ConfigMap
metadata:
name: tailscale-acl
namespace: user-space-{{ .Values.bfl.username }}

View File

@@ -0,0 +1,411 @@
{{- $postgres_secret := (lookup "v1" "Secret" .Release.Namespace "infisical-postgres") -}}
{{- $backend_secret := (lookup "v1" "Secret" .Release.Namespace "infisical-backend") -}}
{{- $postgres_password := randAlphaNum 16 | b64enc -}}
{{- $redis_password := randAlphaNum 16 | b64enc -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
rules:
- apiGroups:
- "*"
resources:
- secrets
- applicationpermissions
verbs:
- get
- list
metadata:
name: {{ .Release.Namespace }}:vault-role
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: infisical-sa
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:vault-rb
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: infisical-sa
roleRef:
kind: ClusterRole
name: {{ .Release.Namespace }}:vault-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:vault-ro-user-rb
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: infisical-sa
roleRef:
kind: ClusterRole
name: tapr-images-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-postgres
namespace: {{ .Release.Namespace }}
type: Opaque
{{ if $postgres_secret -}}
data:
postgres-passwords: {{ index $postgres_secret "data" "postgres-passwords" }}
redis-passwords: {{ index $postgres_secret "data" "redis-passwords" }}
{{ else -}}
data:
postgres-passwords: {{ $postgres_password }}
redis-passwords: {{ $redis_password }}
{{ end }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: infisical-postgres
namespace: {{ .Release.Namespace }}
spec:
app: infisical
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: infisical_os_system
password:
valueFrom:
secretKeyRef:
key: postgres-passwords
name: infisical-postgres
databases:
- name: infisical
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: infisical-redis
namespace: {{ .Release.Namespace }}
spec:
app: infisical
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis-passwords
name: infisical-postgres
namespace: infisical
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: infisical-deployment
namespace: {{ .Release.Namespace }}
labels:
app: infisical
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: infisical
template:
metadata:
labels:
app: infisical
io.bytetrade.app: "true"
spec:
serviceAccountName: infisical-sa
priorityClassName: "system-cluster-critical"
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-0.citus-headless.os-system
- name: PGPORT
value: "5432"
- name: PGUSER
value: infisical_os_system
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: PGDB
value: os_system_infisical
- name: "migration-init"
image: "beclab/infisical:0.1.1"
imagePullPolicy: IfNotPresent
command: ["npm", "run", "migration:latest"]
envFrom:
- secretRef:
name: infisical-backend
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: POSTGRES_USER
value: infisical_os_system
- name: POSTGRES_DB
value: os_system_infisical
- name: DB_CONNECTION_URI
value: "postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@citus-0.citus-headless.os-system/$(POSTGRES_DB)?sslmode=disable"
containers:
- name: infisical
image: "beclab/infisical:0.1.1"
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
path: /api/status
port: 4000
initialDelaySeconds: 10
periodSeconds: 10
ports:
- containerPort: 4000
envFrom:
- secretRef:
name: infisical-backend
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: POSTGRES_USER
value: infisical_os_system
- name: POSTGRES_DB
value: os_system_infisical
- name: DB_CONNECTION_URI
value: "postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@citus-0.citus-headless.os-system/$(POSTGRES_DB)?sslmode=disable"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: redis-passwords
- name: REDIS_URL
value: "redis://:$(REDIS_PASSWORD)@redis-cluster-proxy.os-system:6379/0"
- name: infisical-proxy
image: nginx:stable-alpine3.17-slim
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8088
volumeMounts:
- name: nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: tapr-sidecar
image: beclab/secret-vault:0.1.9
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8080
env:
- name: INFISICAL_URL
value: http://localhost:4000
- name: PG_USER
value: infisical_os_system
- name: PG_DB
value: os_system_infisical
- name: PG_ADDR
value: citus-0.citus-headless.os-system
- name: PASSWORD
valueFrom:
secretKeyRef:
name: infisical-backend
key: SECRET_KEY
volumes:
- name: nginx-config
configMap:
name: infisical-nginx-conf
items:
- key: nginx.conf
path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
name: infisical-service
namespace: {{ .Release.Namespace }}
spec:
selector:
app: infisical
type: ClusterIP
ports:
- protocol: TCP
name: infisical
port: 80
targetPort: 8088
- protocol: TCP
name: sidecar
port: 8080
targetPort: 8080
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-backend
namespace: {{ .Release.Namespace }}
type: Opaque
{{ if $backend_secret -}}
data:
ENCRYPTION_KEY: {{ $backend_secret.data.ENCRYPTION_KEY }}
#INVITE_ONLY_SIGNUP: {{ $backend_secret.data.INVITE_ONLY_SIGNUP }}
JWT_AUTH_SECRET: {{ $backend_secret.data.JWT_AUTH_SECRET }}
JWT_MFA_SECRET: {{ $backend_secret.data.JWT_MFA_SECRET }}
JWT_REFRESH_SECRET: {{ $backend_secret.data.JWT_REFRESH_SECRET }}
JWT_SERVICE_SECRET: {{ $backend_secret.data.JWT_SERVICE_SECRET }}
JWT_SIGNUP_SECRET: {{ $backend_secret.data.JWT_SIGNUP_SECRET }}
SITE_URL: {{ $backend_secret.data.SITE_URL }}
#SMTP_FROM_ADDRESS: {{ $backend_secret.data.SMTP_FROM_ADDRESS }}
SMTP_FROM_NAME: {{ $backend_secret.data.SMTP_FROM_NAME }}
#SMTP_HOST: {{ $backend_secret.data.SMTP_HOST }}
#SMTP_PASSWORD: {{ $backend_secret.data.SMTP_PASSWORD }}
SMTP_PORT: {{ $backend_secret.data.SMTP_PORT }}
#SMTP_SECURE: {{ $backend_secret.data.SMTP_SECURE }}
#SMTP_USERNAME: {{ $backend_secret.data.SMTP_USERNAME }}
SECRET_KEY: {{ $backend_secret.data.SECRET_KEY }}
{{ else -}}
stringData:
ENCRYPTION_KEY: "b318446cc6cd8ac7159ccc8245b32be5"
#INVITE_ONLY_SIGNUP: ""
JWT_AUTH_SECRET: {{ randAlphaNum 32 | lower }}
JWT_MFA_SECRET: {{ randAlphaNum 32 | lower }}
JWT_REFRESH_SECRET: {{ randAlphaNum 32 | lower }}
JWT_SERVICE_SECRET: {{ randAlphaNum 32 | lower }}
JWT_SIGNUP_SECRET: {{ randAlphaNum 32 | lower }}
SITE_URL: "infisical.local"
#SMTP_FROM_ADDRESS: ""
SMTP_FROM_NAME: "Infisical"
#SMTP_HOST: ""
#SMTP_PASSWORD: ""
SMTP_PORT: "587"
#SMTP_SECURE: ""
#SMTP_USERNAME: ""
SECRET_KEY: {{ randAlphaNum 32 | lower }}
{{ end }}
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-frontend
namespace: {{ .Release.Namespace }}
type: Opaque
stringData:
SITE_URL: "infisical.local"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: infisical-nginx-conf
namespace: {{ .Release.Namespace }}
data:
nginx.conf: |
worker_processes 2;
events {}
http {
server {
listen 8088;
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:4000; # for backend
proxy_redirect off;
# proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
proxy_cookie_path / "/; HttpOnly; SameSite=strict";
}
location /tapr {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:8080; # for tapr
proxy_redirect off;
# proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
proxy_cookie_path / "/; HttpOnly; SameSite=strict";
}
location / {
include /etc/nginx/mime.types;
}
}
}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: infisical-user-create-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: user.create
callback: http://infisical-service.{{ .Release.Namespace }}:8080/user/create
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: infisical-user-delete-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: user.delete
callback: http://infisical-service.{{ .Release.Namespace }}:8080/user/delete

View File

@@ -1,340 +1,3 @@
{{- $postgres_secret := (lookup "v1" "Secret" .Release.Namespace "infisical-postgres") -}}
{{- $backend_secret := (lookup "v1" "Secret" .Release.Namespace "infisical-backend") -}}
{{- $postgres_password := randAlphaNum 16 | b64enc -}}
{{- $redis_password := randAlphaNum 16 | b64enc -}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Namespace }}:vault-role
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- "*"
resources:
- secrets
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: user-system-{{ .Values.bfl.username }}:vault-role:app
namespace: user-system-{{ .Values.bfl.username }}
rules:
- apiGroups:
- "*"
resources:
- applicationpermissions
verbs:
- get
- list
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: infisical-sa
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Namespace }}:vault-rb
namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: infisical-sa
roleRef:
kind: Role
name: {{ .Release.Namespace }}:vault-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: user-system-{{ .Values.bfl.username }}:vault-rb:app
namespace: user-system-{{ .Values.bfl.username }}
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: infisical-sa
roleRef:
kind: Role
name: user-system-{{ .Values.bfl.username }}:vault-role:app
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:vault-ro-user-rb
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: infisical-sa
roleRef:
kind: ClusterRole
name: tapr-images-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-postgres
namespace: {{ .Release.Namespace }}
type: Opaque
{{ if $postgres_secret -}}
data:
postgres-passwords: {{ index $postgres_secret "data" "postgres-passwords" }}
redis-passwords: {{ index $postgres_secret "data" "redis-passwords" }}
{{ else -}}
data:
postgres-passwords: {{ $postgres_password }}
redis-passwords: {{ $redis_password }}
{{ end }}
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-postgres
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
{{ if $postgres_secret -}}
data:
postgres-passwords: {{ index $postgres_secret "data" "postgres-passwords" }}
redis-passwords: {{ index $postgres_secret "data" "redis-passwords" }}
{{ else -}}
data:
postgres-passwords: {{ $postgres_password }}
redis-passwords: {{ $redis_password }}
{{ end }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: infisical-postgres
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: infisical
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: infisical_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: postgres-passwords
name: infisical-postgres
databases:
- name: infisical
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: infisical-redis
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: infisical
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis-passwords
name: infisical-postgres
namespace: infisical
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: infisical-deployment
namespace: {{ .Release.Namespace }}
labels:
app: infisical
applications.app.bytetrade.io/author: bytetrade.io
{{ if (eq .Values.debugVersion true) }}
applications.app.bytetrade.io/name: infisical
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
annotations:
applications.app.bytetrade.io/icon: https://bookface-images.s3.amazonaws.com/small_logos/621cb43ec50d1aae545391abcc114014c84d295f.png
applications.app.bytetrade.io/title: Infisical
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"infisical", "host":"infisical-service", "port":80,"title":"Infisical"}]'
{{ end }}
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: infisical
template:
metadata:
labels:
app: infisical
io.bytetrade.app: "true"
spec:
serviceAccountName: infisical-sa
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: infisical_{{ .Values.bfl.username }}
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_infisical
- name: "migration-init"
image: "beclab/infisical:0.1.1"
imagePullPolicy: IfNotPresent
command: ["npm", "run", "migration:latest"]
envFrom:
- secretRef:
name: infisical-backend
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: POSTGRES_USER
value: infisical_{{ .Values.bfl.username }}
- name: POSTGRES_DB
value: user_space_{{ .Values.bfl.username }}_infisical
- name: DB_CONNECTION_URI
value: "postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@citus-master-svc.user-system-{{ .Values.bfl.username }}/$(POSTGRES_DB)?sslmode=disable"
containers:
- name: infisical
image: "beclab/infisical:0.1.1"
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
path: /api/status
port: 4000
initialDelaySeconds: 10
periodSeconds: 10
ports:
- containerPort: 4000
envFrom:
- secretRef:
name: infisical-backend
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: postgres-passwords
- name: POSTGRES_USER
value: infisical_{{ .Values.bfl.username }}
- name: POSTGRES_DB
value: user_space_{{ .Values.bfl.username }}_infisical
- name: DB_CONNECTION_URI
value: "postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@citus-master-svc.user-system-{{ .Values.bfl.username }}/$(POSTGRES_DB)?sslmode=disable"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: infisical-postgres
key: redis-passwords
- name: REDIS_URL
value: "redis://:$(REDIS_PASSWORD)@redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379/0"
{{ if (eq .Values.debugVersion true) }}
- name: infisical-frontend
image: beclab/infisical-frontend:0.1.1
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 10
periodSeconds: 10
envFrom:
- secretRef:
name: infisical-frontend
ports:
- containerPort: 3000
{{ end }}
- name: infisical-proxy
image: nginx:stable-alpine3.17-slim
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8088
volumeMounts:
- name: nginx-config
readOnly: true
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: tapr-sidecar
image: beclab/secret-vault:0.1.6
imagePullPolicy: IfNotPresent
ports:
- name: proxy
containerPort: 8080
env:
- name: OWNER
value: '{{ .Values.bfl.username }}'
- name: PG_USER
value: infisical_{{ .Values.bfl.username }}
- name: PG_DB
value: user_space_{{ .Values.bfl.username }}_infisical
- name: PG_ADDR
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PASSWORD
valueFrom:
secretKeyRef:
name: infisical-backend
key: SECRET_KEY
volumes:
- name: nginx-config
configMap:
name: infisical-nginx-conf
items:
- key: nginx.conf
path: nginx.conf
---
apiVersion: v1
kind: Service
@@ -342,140 +5,13 @@ metadata:
name: infisical-service
namespace: {{ .Release.Namespace }}
spec:
type: ExternalName
externalName: infisical-service.os-system.svc.cluster.local
ports:
- protocol: TCP
name: infisical
port: 80
targetPort: 8088
- protocol: TCP
name: sidecar
port: 8080
targetPort: 8080
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-backend
namespace: {{ .Release.Namespace }}
type: Opaque
{{ if $backend_secret -}}
data:
ENCRYPTION_KEY: {{ $backend_secret.data.ENCRYPTION_KEY }}
#INVITE_ONLY_SIGNUP: {{ $backend_secret.data.INVITE_ONLY_SIGNUP }}
JWT_AUTH_SECRET: {{ $backend_secret.data.JWT_AUTH_SECRET }}
JWT_MFA_SECRET: {{ $backend_secret.data.JWT_MFA_SECRET }}
JWT_REFRESH_SECRET: {{ $backend_secret.data.JWT_REFRESH_SECRET }}
JWT_SERVICE_SECRET: {{ $backend_secret.data.JWT_SERVICE_SECRET }}
JWT_SIGNUP_SECRET: {{ $backend_secret.data.JWT_SIGNUP_SECRET }}
SITE_URL: {{ $backend_secret.data.SITE_URL }}
#SMTP_FROM_ADDRESS: {{ $backend_secret.data.SMTP_FROM_ADDRESS }}
SMTP_FROM_NAME: {{ $backend_secret.data.SMTP_FROM_NAME }}
#SMTP_HOST: {{ $backend_secret.data.SMTP_HOST }}
#SMTP_PASSWORD: {{ $backend_secret.data.SMTP_PASSWORD }}
SMTP_PORT: {{ $backend_secret.data.SMTP_PORT }}
#SMTP_SECURE: {{ $backend_secret.data.SMTP_SECURE }}
#SMTP_USERNAME: {{ $backend_secret.data.SMTP_USERNAME }}
SECRET_KEY: {{ $backend_secret.data.SECRET_KEY }}
{{ else -}}
stringData:
ENCRYPTION_KEY: "b318446cc6cd8ac7159ccc8245b32be5"
#INVITE_ONLY_SIGNUP: ""
JWT_AUTH_SECRET: {{ randAlphaNum 32 | lower }}
JWT_MFA_SECRET: {{ randAlphaNum 32 | lower }}
JWT_REFRESH_SECRET: {{ randAlphaNum 32 | lower }}
JWT_SERVICE_SECRET: {{ randAlphaNum 32 | lower }}
JWT_SIGNUP_SECRET: {{ randAlphaNum 32 | lower }}
SITE_URL: "infisical.local"
#SMTP_FROM_ADDRESS: ""
SMTP_FROM_NAME: "Infisical"
#SMTP_HOST: ""
#SMTP_PASSWORD: ""
SMTP_PORT: "587"
#SMTP_SECURE: ""
#SMTP_USERNAME: ""
SECRET_KEY: {{ randAlphaNum 32 | lower }}
{{ end }}
---
apiVersion: v1
kind: Secret
metadata:
name: infisical-frontend
namespace: {{ .Release.Namespace }}
type: Opaque
stringData:
SITE_URL: "infisical.local"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: infisical-nginx-conf
namespace: {{ .Release.Namespace }}
data:
nginx.conf: |
worker_processes 2;
events {}
http {
server {
listen 8088;
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:4000; # for backend
proxy_redirect off;
# proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
proxy_cookie_path / "/; HttpOnly; SameSite=strict";
}
location /tapr {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:8080; # for tapr
proxy_redirect off;
# proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
proxy_cookie_path / "/; HttpOnly; SameSite=strict";
}
location / {
include /etc/nginx/mime.types;
{{ if (eq .Values.debugVersion true) }}
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://localhost:3000; # for frontend
proxy_redirect off;
{{ end }}
}
}
}
- name: http
port: 8080
protocol: TCP
targetPort: 8080
---
apiVersion: sys.bytetrade.io/v1alpha1

third-party/opentelemetry/README.md vendored Normal file

@@ -0,0 +1,3 @@
# OpenTelemetry
https://github.com/open-telemetry
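
The vendored operator is consumed through its CRDs. A minimal sketch of an `OpenTelemetryCollector` resource that the operator (v0.118.0) would reconcile into a collector Deployment; the name, namespace, and pipeline below are illustrative assumptions, not part of this repo:

```yaml
# Illustrative only: minimal OTLP-in, debug-out collector.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: example-collector   # assumed name for the example
  namespace: os-system
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```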

File diff suppressed because it is too large



@@ -0,0 +1,966 @@
---
# Source: opentelemetry-operator/templates/admission-webhooks/operator-webhook.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": "before-hook-creation"
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: webhook
name: otel-opentelemetry-operator-controller-manager-service-cert
namespace: os-system
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURyVENDQXBXZ0F3SUJBZ0lSQUtNbllGaXU2M3dnZU4vUGZXS0k3dHN3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUNZeEpEQWlCZ05WQkFNVEcyOTBaV3d0CmIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjakNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0MKQVFvQ2dnRUJBTDB6RjdXR0ZPSVNjeEpxdzJ4OGlrMXZiOWxTNEk5OXpPRzhiQWVwNzM5VTE3Tk1SR1NEQ0U0ZApzQXFpNlpQeDVzNGFCSzU3MXYraDdCcnVLaUlJeGU5OEg1OTRITkJzL2tKQ0ViU1Q5cTZqYnR3S1FTOTR4RWpQCk1yUndpQXlkVUlSOVV6QWJvQXFtRVBSRGJsaDRwY0p6QXBPQVhINk8ySG05OHpZeHNIdktRWEkrckt5SUxoenUKLzV4Lzk1NjdhNW0rRHVoOWNuT0Vtek5UZkpUeG1tQTRZWFVJaGk1NDdZOGFGc0ZheWhBTnpUcVlKV041bklCbwpTcWJURjJzeFd4NDZVdGJQUFFkUkxxdEI2VWtvVlRSdXRaemF3dXh5OERUbm1ITDlYemFtaWRhZmdmRC9oNThHCnlMd1NocStrMzVyWk1YVERJTG9zNE1lc1BHbjZoQUVDQXdFQUFhT0J6akNCeXpBT0JnTlZIUThCQWY4RUJBTUMKQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBdwpId1lEVlIwakJCZ3dGb0FVTlNGVUMrQ0RqdWgvL3h6MzNtL09ScVlhckpNd2F3WURWUjBSQkdRd1lvSXRiM1JsCmJDMXZjR1Z1ZEdWc1pXMWxkSEo1TFc5d1pYSmhkRzl5TFhkbFltaHZiMnN1YjNNdGMzbHpkR1Z0Z2pGdmRHVnMKTFc5d1pXNTBaV3hsYldWMGNua3RiM0JsY21GMGIzSXRkMlZpYUc5dmF5NXZjeTF6ZVhOMFpXMHVjM1pqTUEwRwpDU3FHU0liM0RRRUJDd1VBQTRJQkFRQnY0cVBxSFdWeVZrVTk4ZWxjYVQ3aTJoY0dWRFRPSkJOZUkzZTUyaDlQCjlZOEkvQXJEWmR4Wmw3YnF4Zy9XdGFPNDRjUjMrWi9pTTZGS0F5WEZvd0tGSXpVMTZ6ZjNmRGFDQStkY2ppR1IKcHpEaEtjbEREM0cyTExxNFRlRkdKWVdWaTAzTG5ub1hjU0xCQXNnZDhhZHRQRUpsSWxyMFVUaFp1TDFkTndZeApwQ3VKZkVIcUVEdjNyRHpGdk0ySzlLTW1yNE11UG4rK083eStxK2JER3dxRG8yOG5WaTI0NGNJbjFVcENndDVnCjZHQmxQbzU5a3FzRTJRR01zYWRWQlpKRjQzamY3RUZ4LzF1dGlNZlBWaTRvc2YwMjIzK2xBY3h1RERuM1lFeXQKdXdiUTR3TnFuUjl0VUZtbWpyQXFZOXY3cWZ6WjMxeFdSVXAxbG1vU2pOTzgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdlRNWHRZWVU0aEp6RW1yRGJIeUtUVzl2MlZMZ2ozM000YnhzQjZudmYxVFhzMHhFClpJTUlUaDJ3Q3FMcGsvSG16aG9Fcm52Vy82SHNHdTRxSWdqRjczd2ZuM2djMEd6K1FrSVJ0SlAycnFOdTNBcEIKTDNqRVNNOHl0SENJREoxUWhIMVRNQnVnQ3FZUTlFTnVXSGlsd25NQ2s0QmNmbzdZZWIzek5qR3dlOHBCY2o2cwpySWd1SE83L25ILzNucnRybWI0TzZIMXljNFNiTTFOOGxQR2FZRGhoZFFpR0xuanRqeG9Xd1ZyS0VBM05PcGdsClkzbWNnR2hLcHRNWGF6RmJIanBTMXM4OUIxRXVxMEhwU1NoVk5HNjFuTnJDN0hMd05PZVljdjFmTnFhSjFwK0IKOFArSG53Ykl2QktHcjZUZm10a3hkTU1ndWl6Z3g2dzhhZnFFQVFJREFRQUJBb0lCQVFDTFovY2pRSDFvcWxGeAprNHNWQjVyY1BhejdRNDdGQzl1SHhNOFF3b3orbEdRdTc1WkJQUzlFWjZtTDNNZ2d3NC9kOHR2YU9OT05IaUViCklydVd3a0llR0tZd1dVOVozOFZreXN2QzgzZEM3SmRGdUtTWk52SGgrQkdUVkl0YVNCdkkwNU1WRW5YVkF0SGgKT1VJNEhBVi9Ca0V5cjlUM1I0ZUlCSWNOT29XZ1gzcTFyUCtqbk9xdldCcUJ1VW5Ec0RNYVg0NjdJYUZrdjZ3RQpVZFZBbHFkaXdDQXY4NWM2SnhRY0Q0QjNtMUMzZGcxc1pDZ0gzRUt6UzQycjJnUE9EUjJPa3oxQXVMYkt5WDhICnVib3FuQUN6ZVhQWDBDVElEZ1JlR2huTGZ6a0ZLL09DekNCdTZpTFh5UVVCZTdPQkd6dk9RV25wNFVaQnBhY0UKaGk0dWJmNkZBb0dCQVB0SnZ5TjhrV3p5VGlxVUtKQUcrZzcyQkNlUlFrNytoaHJJRVZUYVpwWW9lMEJjcWlmNAo5T3BZUjJXeEhSSkYyak1CNXliTWRhalVDUTRxS2NWUTd4RjA3V3AwUG1zUlR1QU1NakMyeUp5VlJUWWRZSkhhCmlDejZidWpzWlMyRkFQdmJMSnorZU1ISVBtait0UlNLd00zdWlMcE9VaXAzbVdEaEFlYVhyWDNmQW9HQkFNQy8KVGNPNDZ1enZBUjREUk9FQmcyL3Bsd3ZieVNENVUrU0wvWHBHVjRJUTNGMlVJRUxRRE45YXlqVGpTVWRTVDdINgo0OFRxYlNVVVhsY3NFTSs2MXRYZjhYUTNJRWR0VTVBWHRsaFNvN0JnSUtIa1ltN1IzeWt2UUV1SVdXUU15MGY2CmR2UTRkRk0rZVVzOGlYS2JPYU8rNVFWUkR1N0tjUGtveTA1cGhIb2ZBb0dCQUpqaGhTaFI0U2duUXVja3ZJamoKdGI3a2JpS2tmWE1SNXdUa002Y3NPTDJpWTFvRkJvRExOalpjL3hNZmJsQnZyeERaVjRpRFhCSWE3bWR2djNvTQpnMlpiZlJZSEl2S0ExVHY2TDQ3enBabWVOejExSWd1YXFMNjBua1dYalBia3RIU0dJOTVmODVmeC9BWms4RVpQCkpINGxZWW4zbklXNXZkYnpEZlQ4MHRDQkFvR0FQYmxGY3ViMXZGZ3hXR1lkbVp4OWRjb3RCNndqZFg3Q2djN2UKcGxoK1QzV016QjdTVWZNRUhFYWJ1R0lNcGwxU2ppRlU3VXRSRm0yMlpGNEZLRENoK2EyVVNlNFpWU1pLZXp0TQp0bTRJWTBQMVQwS3V6dVJBZlpUWEZ4a2IzekZGcTlBbVpjRHFaM243SjcvRUdFSmpLKy9Hc1hRcXZ3ZHZOQ3IvCktDWWNPV3NDZ1lBQXpiNGwwcGxLT3VVSEduY1FrZW9HN1FOTC9WbXRGY1htOS94OFBJN1Q3dmhBQ2RaYWNoZ1EKUmM2WU83NVUrQzB0ZFVXZDdreUFBakM4ejBsY2xmOGptUWs0akZZUGJoQWp4YVdubFB2Tk1jS0JhMURiS002VQp0WFY4bHVSWXMzcDhqVm9OOVRCYjBiSkVCQmNjanl0Z0s0eTU0TUNxK0NaQUNnTDBvQjc1MUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
---
# Source: opentelemetry-operator/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: opentelemetry-operator
namespace: os-system
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
---
# Source: opentelemetry-operator/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-manager
rules:
- apiGroups:
- ""
resources:
- configmaps
- persistentvolumeclaims
- persistentvolumes
- pods
- serviceaccounts
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
- extensions
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- get
- list
- watch
- apiGroups:
- config.openshift.io
resources:
- infrastructures
- infrastructures/status
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- list
- update
- apiGroups:
- monitoring.coreos.com
resources:
- podmonitors
- servicemonitors
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- instrumentations
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- opampbridges
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- opampbridges/finalizers
verbs:
- update
- apiGroups:
- opentelemetry.io
resources:
- opampbridges/status
verbs:
- get
- patch
- update
- apiGroups:
- opentelemetry.io
resources:
- opentelemetrycollectors
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- opentelemetrycollectors/finalizers
verbs:
- get
- patch
- update
- apiGroups:
- opentelemetry.io
resources:
- opentelemetrycollectors/status
verbs:
- get
- patch
- update
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- route.openshift.io
resources:
- routes
- routes/custom-host
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- targetallocators
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- opentelemetry.io
resources:
- targetallocators/status
verbs:
- get
- patch
- update
- apiGroups:
- cert-manager.io
resources:
- issuers
- certificaterequests
- certificates
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
---
# Source: opentelemetry-operator/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "5"
name: otel-opentelemetry-operator-metrics
rules:
- nonResourceURLs:
- /metrics
verbs:
- get
---
# Source: opentelemetry-operator/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-proxy
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
---
# Source: opentelemetry-operator/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otel-opentelemetry-operator-manager
subjects:
- kind: ServiceAccount
name: opentelemetry-operator
namespace: os-system
---
# Source: opentelemetry-operator/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otel-opentelemetry-operator-proxy
subjects:
- kind: ServiceAccount
name: opentelemetry-operator
namespace: os-system
---
# Source: opentelemetry-operator/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-leader-election
namespace: os-system
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: opentelemetry-operator/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-leader-election
namespace: os-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: otel-opentelemetry-operator-leader-election
subjects:
- kind: ServiceAccount
name: opentelemetry-operator
namespace: os-system
---
# Source: opentelemetry-operator/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
name: otel-opentelemetry-operator
namespace: os-system
spec:
ports:
- name: https
port: 8443
protocol: TCP
targetPort: https
- name: metrics
port: 8080
protocol: TCP
targetPort: metrics
selector:
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/component: controller-manager
---
# Source: opentelemetry-operator/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
name: otel-opentelemetry-operator-webhook
namespace: os-system
spec:
ports:
- port: 443
protocol: TCP
targetPort: webhook-server
selector:
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/component: controller-manager
---
# Source: opentelemetry-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: controller-manager
applications.app.bytetrade.io/author: bytetrade.io
annotations:
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator
namespace: os-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/component: controller-manager
template:
metadata:
annotations:
kubectl.kubernetes.io/default-container: manager
labels:
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/component: controller-manager
spec:
containers:
- args:
- --metrics-addr=0.0.0.0:8080
- --enable-leader-election
- --health-probe-addr=:8081
- --webhook-port=9443
- --enable-nginx-instrumentation
- --enable-go-instrumentation
- --collector-image=otel/opentelemetry-collector-k8s:0.118.0
command:
- /manager
env:
- name: ENABLE_WEBHOOKS
value: "true"
image: "ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:0.118.0"
name: manager
ports:
- containerPort: 8080
name: metrics
protocol: TCP
- containerPort: 9443
name: webhook-server
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
resources:
{}
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --v=0
image: "quay.io/brancz/kube-rbac-proxy:v0.18.1"
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
protocol: TCP
serviceAccountName: opentelemetry-operator
terminationGracePeriodSeconds: 10
volumes:
- name: cert
secret:
defaultMode: 420
secretName: otel-opentelemetry-operator-controller-manager-service-cert
securityContext:
fsGroup: 65532
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
---
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/version: "0.118.0"
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: wait-otel-controller
labels:
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
spec:
restartPolicy: Never
containers:
- name: post-install-job
image: "alpine:3.18"
command: ["/bin/sleep","30"]
---
# Source: opentelemetry-operator/templates/admission-webhooks/operator-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: webhook
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-mutation
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /mutate-opentelemetry-io-v1alpha1-instrumentation
port: 443
failurePolicy: Fail
name: minstrumentation.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- instrumentations
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /mutate-opentelemetry-io-v1beta1-opentelemetrycollector
port: 443
failurePolicy: Fail
name: mopentelemetrycollectorbeta.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- opentelemetrycollectors
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /mutate-v1-pod
port: 443
failurePolicy: Ignore
name: mpod.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- CREATE
resources:
- pods
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
---
# Source: opentelemetry-operator/templates/admission-webhooks/operator-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: opentelemetry-operator-0.81.0
app.kubernetes.io/name: opentelemetry-operator
app.kubernetes.io/version: "0.118.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: otel
app.kubernetes.io/component: webhook
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook-weight": "-5"
name: otel-opentelemetry-operator-validation
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /validate-opentelemetry-io-v1alpha1-instrumentation
port: 443
failurePolicy: Fail
name: vinstrumentationcreateupdate.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- instrumentations
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /validate-opentelemetry-io-v1alpha1-instrumentation
port: 443
failurePolicy: Ignore
name: vinstrumentationdelete.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1alpha1
operations:
- DELETE
resources:
- instrumentations
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /validate-opentelemetry-io-v1beta1-opentelemetrycollector
port: 443
failurePolicy: Fail
name: vopentelemetrycollectorcreateupdatebeta.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- opentelemetrycollectors
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
- admissionReviewVersions:
- v1
clientConfig:
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lSQU1ubGpoOHJPMUtUOGR2VndCM2dKSXd3RFFZSktvWklodmNOQVFFTEJRQXcKTFRFck1Da0dBMVVFQXhNaWIzQmxiblJsYkdWdFpYUnllUzF2Y0dWeVlYUnZjaTF2Y0dWeVlYUnZjaTFqWVRBZQpGdzB5TlRBek1ETXdPVFExTWpCYUZ3MHlOakF6TURNd09UUTFNakJhTUMweEt6QXBCZ05WQkFNVEltOXdaVzUwClpXeGxiV1YwY25rdGIzQmxjbUYwYjNJdGIzQmxjbUYwYjNJdFkyRXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUEKQTRJQkR3QXdnZ0VLQW9JQkFRRFRKcW5rVjFLM1NHN1hNR3RJVGI4N0hLdlk1bUM0OXNRVStYajNrTlAwM1JLbQowWVA3TjZ0TDB3Qnd3bEFJSTZYL0k2ZFV4Kzlaays0YXM1RDhjanhtenhMSnlaNi8xbm84dWlMdGFrbEtuTk9yCm1EVjBmSWZVK0l6QVQ0Rno2UHRUc3dMaXR3ZjZFbkFJV0szTWQxdTZUVlhQUFJQeThkUVVTTktMUU5hVnJmMTIKZWN5elgwWkw0Q00rS0ZNWUpRUStXVnczanpzanAvWjUzdDRXR3JyQVk0bVZNMnNwS3p5WndlbmpIY25pbDhIOQpxUzg3YmtyQUE3dGpxb3RHRkxVTVY0eUpjd2NzSXN1VEpRM0F5TkZFMEsrTDBXZWRiZDhUemVoeFJNWlhuRFd2Ckd3bEppNlp0ejJ2OWZXT3BWZDkzb1NDVXhPWTcyMWNRQ1NVT0FYdWRBZ01CQUFHallUQmZNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFkQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0R3WURWUjBUQVFILwpCQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVOU0ZVQytDRGp1aC8veHozM20vT1JxWWFySk13RFFZSktvWklodmNOCkFRRUxCUUFEZ2dFQkFLb2kxdTJLV0lIRDhBTy9xTlhJVS9lRy81dGY5THBWaFBESkM4U1dnc0x1d0FDZlVva1UKUkFNT3puRXA1OHVEK2JobVBxV1NpeG1jVFo4eTlmSWg5cUQ5d2hKSnFzV0pBazZoQXlnRGRJZVpSNFpPelVmdwo5Wmc4Sm9SWkJvMzdzbDVYZVNRNDV3Y2E4RmN6WVpNMmJrSlhiNGxEak9iSitRVkU2dklORTRtNjlvVlhCUzJuCmg1a0t6UFhleDRzczd6NVNXd0Q0MG42QXVZaWxCeFZvdlQxQkZwcCtkNldFcDJhYUhlRDZPaXJQWmVzSzRYTE8KSHA0aWFmejdHU1pPZVlWVWpHTlBsTk1tc24yekhaVDJjVk9yMHpUeStoV0k1RHdTVmZCejFNRUIzRmw2L200LwpVcEZIMXZ5VmtpcnpBRzBYekhDd1EyY1NMR2h1NEhadlU4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
service:
name: otel-opentelemetry-operator-webhook
namespace: os-system
path: /validate-opentelemetry-io-v1beta1-opentelemetrycollector
port: 443
failurePolicy: Ignore
name: vopentelemetrycollectordeletebeta.kb.io
rules:
- apiGroups:
- opentelemetry.io
apiVersions:
- v1beta1
operations:
- DELETE
resources:
- opentelemetrycollectors
scope: Namespaced
sideEffects: None
timeoutSeconds: 10
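For reference, each validating webhook registered above replies to the API server with an AdmissionReview response; a minimal sketch of that payload (field names follow the admission.k8s.io/v1 API; the uid and message values here are illustrative, and the operator's actual validation logic is not shown):

```python
import json

def admission_response(uid, allowed, message=""):
    # Build a minimal AdmissionReview v1 response body, the shape the
    # validating webhooks above return to the API server.
    body = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": uid, "allowed": allowed},
    }
    if not allowed:
        # A denial carries a human-readable reason back to the client.
        body["response"]["status"] = {"message": message}
    return json.dumps(body)

print(admission_response("705ab4f5-6393-11e8-b7cc-42010a800002",
                         False, "spec.sampler.argument is invalid"))
```

Note that the DELETE webhooks above use `failurePolicy: Ignore` so deletions still succeed when the webhook is unreachable, while CREATE/UPDATE use `Fail` to reject unvalidated changes.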
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: jaeger-storage-instance
namespace: os-system
labels:
applications.app.bytetrade.io/author: bytetrade.io
spec:
image: jaegertracing/jaeger:2.3.0
upgradeStrategy: none
mode: deployment
volumes:
- name: storage
emptyDir: {}
# FIXME: permission
# hostPath:
# path: /olares/userdata/otel-data
# type: DirectoryOrCreate
volumeMounts:
- name: storage
mountPath: /data
ports:
- name: jaeger
port: 16686
config:
service:
extensions: [jaeger_storage, jaeger_query]
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [jaeger_storage_exporter]
telemetry:
resource:
service.name: jaeger
metrics:
level: detailed
readers:
- pull:
exporter:
prometheus:
host: 0.0.0.0
port: 8888
logs:
level: info
extensions:
jaeger_query:
storage:
traces: some_store
traces_archive: another_store
ui:
jaeger_storage:
backends:
some_store:
badger:
directories:
keys: "/data/jaeger/"
values: "/data/jaeger/"
ephemeral: false
another_store:
badger:
directories:
keys: "/data/jaeger_archive/"
values: "/data/jaeger_archive/"
ephemeral: false
receivers:
otlp:
protocols:
grpc:
# bind on all interfaces so the Instrumentation endpoints, which target
# the jaeger-storage-instance-collector service, can reach the receiver
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
processors:
batch:
send_batch_max_size: 10000
timeout: 0s
exporters:
jaeger_storage_exporter:
trace_storage: some_store
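The batch processor configured above caps each exported batch at `send_batch_max_size` spans, and with `timeout: 0s` batches are flushed as soon as they are formed rather than waiting on a timer. A simplified model of the size-capping behavior (not the collector's actual implementation):

```python
def split_batches(spans, max_size=10000):
    # Split an incoming span list into export-sized batches, so that no
    # single export call exceeds max_size items (send_batch_max_size).
    return [spans[i:i + max_size] for i in range(0, len(spans), max_size)]

print(split_batches(list(range(7)), max_size=3))  # → [[0, 1, 2], [3, 4, 5], [6]]
```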
---
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: olares-instrumentation
namespace: os-system
spec:
exporter:
endpoint: http://jaeger-storage-instance-collector.os-system:4317
propagators:
- tracecontext
- baggage
- b3
sampler:
type: parentbased_traceidratio
argument: "1"
python:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-storage-instance-collector.os-system:4318
dotnet:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-storage-instance-collector.os-system:4318
nodejs:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-storage-instance-collector.os-system:4318
nginx:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-storage-instance-collector.os-system:4318
go:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://jaeger-storage-instance-collector.os-system:4318
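Workloads opt in to the auto-instrumentation defined above through pod annotations read by the operator's mutating webhook; a sketch (the Deployment name is illustrative, and the annotation value references the Instrumentation resource by namespace/name):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative workload name
  namespace: os-system
spec:
  template:
    metadata:
      annotations:
        # inject the Node.js auto-instrumentation from the
        # olares-instrumentation resource defined above
        instrumentation.opentelemetry.io/inject-nodejs: "os-system/olares-instrumentation"
```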


@@ -1,8 +1,6 @@
 apiVersion: v2
-name: gpu
+name: opentelemetry
 description: A Helm chart for Kubernetes
 maintainers:
   - name: bytetrade
 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -17,10 +15,10 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.1.0
+version: 0.0.1
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
 # follow Semantic Versioning. They should reflect the version the application is using.
 # It is recommended to use it with quotes.
-appVersion: "0.1.12"
+appVersion: "0.118.0"

Some files were not shown because too many files have changed in this diff.