Compare commits

...

106 Commits

Author SHA1 Message Date
lovehunter9
05d14de4fe fix: files sync paste dir out bug 2025-07-15 21:16:34 +08:00
wiy
058cf31e44 system-frontend&user-service: update user-service & system-frontend new version (#1544)
* feat(user-service): update dataStore use redis

* feat(wise): remove from system-frontend
fix(settings): some bugs
fix(files): some bugs

* knowledge: remove knowledge, rss, argo

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-07-15 00:39:01 +08:00
hysyeah
72a5b2c6a2 app-service, bfl, cli, authelia, kubesphere: support create user from user CR (#1543)
* app-service, bfl, cli, authelia, kubesphere: support create user by CR

* fix: rm kubesphere-monitoring-federated ns
2025-07-14 23:48:53 +08:00
eball
f78890b01b otel: disable telemetry by default (#1542) 2025-07-14 23:48:18 +08:00
eball
13df294653 olaresd: refactor api server (#1541) 2025-07-14 23:47:55 +08:00
0x7fffff92
2af86e161a fix(headscale): Make the Affinity Rule Strict (#1540)
* fix(headscale): Make the Affinity Rule Strict

* fix(headscale): make ci happy

---------

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-07-14 23:47:25 +08:00
aby913
ee567c270c fix(files): external delete (#1539)
* fix(files): external delete

* login & system-frontend: update login and system-frontend new version

---------

Co-authored-by: qq815776412 <815776412@qq.com>
2025-07-12 00:23:59 +08:00
hysyeah
4246bcce06 fix: simplify nat permission request (#1538) 2025-07-12 00:23:10 +08:00
eball
fb73d62bd5 bfl: change unmount-api of file-server (#1537) 2025-07-12 00:22:27 +08:00
eball
209f0d15e3 authelia: send notification in user login phase (#1536)
* authelia: send notification in user login phase

* fix: set cookie nil

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-07-12 00:21:48 +08:00
dkeven
78911d44cf feat(gpu): add more metrics in GPU monitor API (#1535) 2025-07-12 00:20:41 +08:00
salt
d964c33c2d feat: Chinese uses both single-character segmentation and word segmen… (#1534)
feat: Chinese uses both single-character segmentation and word segmentation. Word segmentation is used for easier sorting.

Co-authored-by: ubuntu <you@example.com>
2025-07-11 22:00:14 +08:00
salt
2b54795e10 fix: waiting... Both uppercase and lowercase letters can be searched, include special token (#1533)
fix: Both uppercase and lowercase letters can be searched, and special characters can be searched as well.

Co-authored-by: ubuntu <you@example.com>
2025-07-11 13:20:31 +08:00
aby913
efb4be4fcf fix(files): deletion and other fixes (#1532)
* fix(files): deletion and other fixes

* feat(files & marker): update files and market new version

* feat: update market worker count

* Update bfl_deploy.yaml

---------

Co-authored-by: qq815776412 <815776412@qq.com>
Co-authored-by: icebergtsn <zyh2433219116@gmail.com>
Co-authored-by: eball <liuy102@hotmail.com>
2025-07-11 00:35:46 +08:00
simon
89575096ba feat(knowledge): knowledge & download refactor (#1531)
* knowledge

* knowledge
2025-07-10 21:36:30 +08:00
dkeven
5edba60295 fix(cli): remove state files of olaresd when uninstalling (#1530) 2025-07-10 16:12:23 +08:00
eball
1aecc3495a ci: add a parameter of the code repository (#1529)
* ci: add a parameter of the code repository

* fix: file name bug

* refactor(cli): adjust local release command for vendor repo path

---------

Co-authored-by: dkeven <dkvvven@gmail.com>
2025-07-10 16:11:03 +08:00
salt
2d5c1fc484 feat: hybrid unigram search for title (#1528)
Co-authored-by: ubuntu <you@example.com>
2025-07-09 23:20:44 +08:00
hysyeah
81355f4a1c authelia: send login message to os.users.<olaresid> (#1527) 2025-07-09 23:20:13 +08:00
lovehunter9
2c4e9fb835 feat: seafile add support for avi, wmv, mkv, flv, rmvb (#1526) 2025-07-09 23:19:32 +08:00
dkeven
4947538e68 fix(daemon): apply filters correctly when listing users (#1525) 2025-07-09 23:18:39 +08:00
Peng Peng
21bb10b72b Revert "gpu: refactor gpu scheduler with cpp (#1475)"
This reverts commit ae3e4e6bb9.
2025-07-09 13:26:41 +08:00
wiy
8064c591f2 feat(files): files supports multiple nodes (#1524)
* feat(system-frontend): update files supports multiple nodes

* feat: add files routing gateway

* feat(media-server): support for multiple nodes

* feat(files): update files supports multiple nodes

---------

Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
Co-authored-by: aby913 <aby913@163.com>
2025-07-08 23:11:41 +08:00
Calvin W.
1073575a1d docs: add readmes for Olares components (#1522)
* docs: add readmes for Olares components

* merge with latest upstream
2025-07-08 21:34:05 +08:00
dkeven
4cf977f6df fix(ci): specify repo when checkout code for PR (#1523) 2025-07-08 17:53:46 +08:00
hysyeah
0dda3811c7 bfl, authelia, lldap: change access-token expiry duration, support refresh and revoke user token (#1521)
bfl, authelia, lldap: change access-token expiry duration and support refresh; revoke user token after password reset
2025-07-08 00:03:59 +08:00
hysyeah
2632b45fc2 bfl, app-service, system-frontend/dashboard: remove analytics (#1520)
* bfl, app-service: remove analytics

* fix(system-frontend): remove dashboard analytics

* fix(system-frontend): update system-frontend version

---------

Co-authored-by: yyh <24493052+yongheng2016@users.noreply.github.com>
2025-07-08 00:03:11 +08:00
berg
ae3f3d6a20 market: v1.12 new category and fix some bugs. (#1518)
feat: v1.12 new category and fix some bugs.
2025-07-05 00:55:37 +08:00
eball
4f3b824f48 authelia: update oidc cert (#1516) 2025-07-05 00:54:44 +08:00
hysyeah
9efa6df969 tapr: add default perm for nats subject (#1515)
fix: add default perm for nats subject
2025-07-05 00:54:01 +08:00
dkeven
045dfc11bc perf(ci): ignore more archs when releasing cli (#1514)
* perf(ci): ignore more archs when releasing cli

* Update auth_backend_deploy.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-07-04 18:45:36 +08:00
hysyeah
9913d29f81 studio-server: move studio server to os-framework (#1513) 2025-07-04 00:42:39 +08:00
berg
0ccf091aff market, settings: fix the problem of theme settings & settings apps status & market terminusInfo error (#1512)
feat: update market frontend and backend version
2025-07-04 00:41:54 +08:00
dkeven
01f3b27b8c feat(upgrade): update sysconf for specific versions (#1511) 2025-07-04 00:41:12 +08:00
dkeven
475faafec4 fix(cli): clear upgrade-related state files when uninstalling (#1510) 2025-07-03 21:01:07 +08:00
berg
31ab286a4b market, profile: fix display error in avatar selector's image list and clear market data when terminusId changed (#1509)
feat: update market frontend and backend version
2025-07-03 00:51:40 +08:00
eball
c9b4a40a1c olares: refactor installation manifest (#1508)
* olares: refactor installation manifest

* fix: file name typo

* fix: add http accept header

* fix: bug

* fix: bug

* fix: import json
2025-07-03 00:50:09 +08:00
simon
da19d00d08 fix(download): fix download task operation & reduce youtube API requests (#1507)
download
2025-07-02 21:49:49 +08:00
dkeven
49d233a55b fix(cli): also update local reserved ports when modifying sysconf (#1506) 2025-07-02 21:49:23 +08:00
dkeven
300aaa0753 fix(daemon): handle empty pid files when check process running (#1505) 2025-07-02 21:48:56 +08:00
berg
962b220440 market: add local chart upload socket event & update menu and add search function (#1504)
* fix: omit to gen entrance url before active

* feat: update market frontend and backend version

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-07-01 23:44:31 +08:00
salt
4da25bca36 fix: frontend_resource_uri mistakenly used when a physical path is needed (#1500)
* fix: 1. files like 'why-olares.md' were not found when searching for 'why' or 'olares' 2. errors were not propagated when generate_monitor_folder_path_list was used for convert_from_physical_path_to_frontend_resource_uri

* fix: search3 fix for frontend_resource_uri being used where a physical path is needed

* fix: use wrong image

---------

Co-authored-by: ubuntu <you@example.com>
2025-07-01 23:32:34 +08:00
dkeven
42eff16695 feat(cli): config endpoint_pod_names in coredns when installing (#1503) 2025-07-01 20:35:42 +08:00
dkeven
450aa19dfc fix(cli): also reserve local ports for l4-proxied service (#1502) 2025-07-01 20:35:20 +08:00
eball
c750f6f85b infisical: create user error (#1501) 2025-07-01 20:33:18 +08:00
berg
bf57da0fa4 market: waiting for the app-service to start & displays the failed status of the installation button. (#1499)
feat: update market version
2025-06-30 23:57:57 +08:00
0x7fffff92
5df379f286 feat(headscale): let headscale run on the master node like l4-bfl-proxy (#1498)
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-06-30 21:02:26 +08:00
dkeven
cfb54fb974 feat(cli): auto enable GPU when adding new node (#1497) 2025-06-30 21:02:00 +08:00
eball
9515c05bb6 bfl: do not change owner when restart (#1496) 2025-06-30 21:01:25 +08:00
dkeven
bdcd924e50 chore(cli): remove unused DeleteCache arg and module (#1495) 2025-06-30 21:01:10 +08:00
eball
e9eb218348 olaresd: refresh user expiring certs (#1493)
* feat: refresh user expiring certs

* fix: admin user not found
2025-06-30 21:00:32 +08:00
eball
9746e2c110 infisical: crash when user not found (#1492) 2025-06-30 21:00:14 +08:00
berg
27d9715292 market: multi user multi source (#1490)
* multi user & multi source & pre-render and collect image download progress & custom render variants

* support GlobalEnvs

* feat: release system-frontend: v1.3.88

* feat: app-service, studio-server

* feat: update market backend version

---------

Co-authored-by: Sai <kldtks@live.com>
Co-authored-by: hys <hysyeah@gmail.com>
2025-06-28 16:46:44 +08:00
salt
10d6c2a6fa fix: 1. fix: like 'why-olares.md', if input 'why', 'olares', search w… (#1491)
fix: 1. files like 'why-olares.md' were not found when searching for 'why' or 'olares' 2. errors were not propagated when generate_monitor_folder_path_list was used for convert_from_physical_path_to_frontend_resource_uri

Co-authored-by: ubuntu <you@example.com>
2025-06-28 16:46:10 +08:00
eball
57d8a55d8d authelia: add user list api (#1489) 2025-06-27 22:07:27 +08:00
dkeven
b9a227acd7 fix(manifest): update the missed reverse proxy image version (#1488) 2025-06-27 11:27:07 +08:00
wiy
e6115794ce feat(system-frontend): update system-frontend new version to v1.3.86 (#1487) 2025-06-27 11:24:02 +08:00
dkeven
22739c90db fix(manifest): add missing app author label to argo deploy (#1486) 2025-06-27 11:23:29 +08:00
dkeven
6fac46130a perf(gpu): use our fork of dcgm-exporter with lower memory consumption (#1485) 2025-06-27 11:23:07 +08:00
simon
e19e049e7d feat(knowledge): add youtube feed and optimize the file name for aria2 download (#1481)
knowledge v0.12.12
2025-06-26 15:53:40 +08:00
wiy
1d0c20d6ad fix(system-frontend): copy nginx address error (#1484) 2025-06-26 15:16:18 +08:00
dkeven
397590d402 fix(cli): set health host of felix to lo addr explicitly (#1483) 2025-06-26 15:15:53 +08:00
hysyeah
fc1a59b79b ks,cli: remove host_ip label from some metric (#1482)
ks,cli: remove host_ip label from metric
2025-06-26 00:05:10 +08:00
eball
3dea149790 olaresd: network interface api modified and nvstream mdns bug fix (#1480) 2025-06-26 00:04:10 +08:00
0x7fffff92
9d6834faa1 feat(tailscale): let tailscale run on the node where headscale is run… (#1479)
feat(tailscale): let tailscale run on the node where headscale is running

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-06-26 00:03:51 +08:00
dkeven
bef61309a3 feat(cli): set explicit image gc policy when installing K8s (#1478) 2025-06-26 00:03:04 +08:00
salt
cf52a59ef7 feat: search3 support multiple node for cache and external, run as daemonset (#1477)
* feat: search3 support multiple node for cache and external, and search3monitor run in daemon set

* fix: fix search3 initialization failure caused by the missing table __diesel_schema_migrations

---------

Co-authored-by: ubuntu <you@example.com>
2025-06-26 00:02:36 +08:00
wiy
80023be159 feat(system-frontend): merge system apps main (#1476)
* feat(system-frontend): merge apps into one image

* fix(system-frontend): update image version to v1.3.85

---------

Co-authored-by: yyh <24493052+yongheng2016@users.noreply.github.com>
2025-06-26 00:02:03 +08:00
eball
ae3e4e6bb9 gpu: refactor gpu scheduler with cpp (#1475) 2025-06-24 23:29:13 +08:00
dkeven
8c9e4d532b fix(daemon): upgrade runc dependency to fix vulnerability (#1473) 2025-06-24 21:33:43 +08:00
eball
3c48afb5b5 olares: move gpu package (#1474)
* olares: move gpu package

* fix: hami webui image
2025-06-24 21:32:37 +08:00
dkeven
3d22a01eef fix(cli): do not wait for recreation of pods without owner when changing ip (#1472) 2025-06-23 23:26:41 +08:00
eball
d6263bacca authelia: remove httponly option from set-cookie (#1471) 2025-06-23 23:25:55 +08:00
hysyeah
3b070ea095 node-exporter: add pcie_version,sata_version label for disk metric (#1470)
node-exporter: add pcie_version,sata_version label for node_disk_smartctl_info metric
2025-06-23 23:25:19 +08:00
dkeven
82b715635b feat: build and use hami-webui images using our own repo (#1469) 2025-06-23 23:24:38 +08:00
Peng Peng
1d4494c8d7 feat(user-service, notification, analytics): put prisma library under node_modules in the Docker images (#1468)
feat: add prisma dependency to the docker
2025-06-23 11:22:31 +08:00
simon
56f5c07229 feat(knowledge): add ebook , pdf download and article extractor (#1467)
knowledge v0.12.11
2025-06-21 02:08:19 +08:00
berg
697ac440c7 wise, studio, desktop, dashboard: update system frontend version to v1.3.82 (#1466)
feat: update system frontend version to v1.3.82
2025-06-21 02:07:58 +08:00
eball
f0edbc08a6 gpu: bump libvgpu.so version (#1465) 2025-06-20 20:31:41 +08:00
eball
001607e840 authelia: add SameSite option to set-cookie (#1464) 2025-06-20 20:31:23 +08:00
dkeven
e8f525daca refactor(daemon): new scheme for upgrade APIs and operations (#1463) 2025-06-20 20:30:46 +08:00
salt
6d6f7705c9 feat: return search3 result with standard resource_uri (#1462)
* fix: fix search3 escape error

* feat: for search return resource_uri with standard mode

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-20 11:18:01 +08:00
wiy
46b7fa0079 feat(system-frontend): update desktop files search; update dashboard chart components; (#1461) 2025-06-20 00:27:06 +08:00
hysyeah
793a62396b lldap,system-server: pub event async; change secret ns (#1460)
lldap,system-server: pub event async
2025-06-20 00:26:44 +08:00
eball
7cb4975f5b authelia: replace http session with lldap jwt (#1459)
* authelia: replace http session with lldap jwt

* fix: remove check auth

* fix: set default configuration

* fix: revert pg and nats configuration
2025-06-20 00:26:12 +08:00
eball
bfaf647ad1 tapr, cli: add extension vchord to pg and decrease k3s image fs threshold (#1458)
* tapr, cli: add extension vchord to pg and decrease k3s image fs threshold

* fix: image tag
2025-06-19 23:18:56 +08:00
hysyeah
23d3dc58ed lldap,tapr: add totp api (#1456) 2025-06-19 00:20:18 +08:00
yyh
7bf07f36b7 feat(system-frontend): update dashboard, control hub, and settings image (#1455)
* feat(system-frontend): update dashboard, control hub, and settings images to v1.3.80

* feat(ks_server): add environment variables for NODE_IP and TERMINUSD_HOST
2025-06-19 00:19:17 +08:00
eball
7e7117fc3a cli, daemon: persist the user name to the Olares release file (#1454) 2025-06-19 00:18:38 +08:00
hysyeah
ff159c7a29 tapr: change nats subject name (#1452) 2025-06-17 23:38:39 +08:00
yyh
92b84ab70b feat(system-frontend/ks_server): update apps image and monitoring server versions (#1451)
* feat: update apps image  and monitoring server versions

* fix(system-frontend): update files-frontend image version to v1.3.79
2025-06-17 23:38:03 +08:00
dkeven
561d4ba93c refactor(cli): unify local release with daily build (#1450) 2025-06-17 23:37:29 +08:00
aby913
2089e42c32 files: fix files, gateway image (#1449)
files: fix files, appdata-gateway image
2025-06-17 23:37:02 +08:00
eball
b50139af5d authelia: wrong lldap service namespace configuration (#1448)
* authelia: wrong lldap service namespace configuration

* fix: change lldap secret namespace

* fix: nats namespace

* bfl: fix lldap namespace bug

* fix: app-service lldap secret

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-06-17 23:36:37 +08:00
eball
daacba2fa4 cli,bfl,app-service: new namespace structure (#1443)
* refactor: os-system namespace in yaml

* refactor: new namespace structure

* Update system-frontend.yaml

* Update lldap-deployment.yaml

* refactor: bump system server version

* fix: bfl and gpu scheduler

* fix: kubesphere,studio-server image

* tapr: bump components version

* chore(ks_server): os-system namespace split

* backup-server: bump components version

* fix: remove nats-box

* fix: restore backup svc name

* files: bump components version

* fix: replace backup deployment name

* fix: change lldap and sys-event namespace

* refactor(gpu): update hami to use gpu-scheduler in os-gpu

* fix: sign cert for otel

* fix: template bug

* fix: template bug

* fix: missing namespace

* fix: namespace label and network policy bug

* fix: service namespace

---------

Co-authored-by: Peng Peng <billpengpeng@gmail.com>
Co-authored-by: hys <hysyeah@gmail.com>
Co-authored-by: yyh <24493052+yongheng2016@users.noreply.github.com>
Co-authored-by: aby913 <aby913@163.com>
Co-authored-by: dkeven <dkvvven@gmail.com>
2025-06-16 23:12:57 +08:00
dkeven
018b3ef3cc refactor(cli): distinguish between 32-bit and 64-bit arch in release ci (#1447) 2025-06-16 21:52:57 +08:00
dkeven
ddaa0daf14 fix(daemon): do not manage network interfaces of K8s (#1446) 2025-06-16 19:50:25 +08:00
salt
13e924fcc7 fix: fix search3 error (#1444)
fix: fix search3 escape error

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-16 13:27:15 +08:00
wiy
6b3032f04d feat(system-frontend): update system frontend apps new version (#1441)
feat(system-frontend): update system frontend apps version
2025-06-13 00:16:22 +08:00
simon
4f08f5f341 knowledge: fix article extractor bugs (#1440)
dev
2025-06-12 23:47:24 +08:00
eball
67e91df96b daemon: add api to dashboard (#1439)
* daemon: change the module name of the olares-daemon

* daemon: add api to dashboard

* daemon: add api to dashboard
2025-06-12 23:46:56 +08:00
hysyeah
e915b70e4b fix: cpu temp metric (#1438) 2025-06-12 23:46:34 +08:00
salt
e1ca1a97db feat: remove pure lingua-rs language detection method (#1437)
* feat: remove pure lingua-rs language detection method

* feat: comment MONITOR_DETECOTR code

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-12 21:25:38 +08:00
eball
688c4b4010 daemon: change the module name of the olares-daemon (#1436) 2025-06-12 14:23:19 +08:00
salt
52f6dc7159 fix: fix monitor document title detection language error (#1435)
* fix: fix monitor document title detection language error

* fix: when upload folder or file, rename error

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-12 11:53:03 +08:00
aby913
9f824292d1 backup-server: fix backup period calculation (#1434) 2025-06-12 11:51:02 +08:00
448 changed files with 15148 additions and 17729 deletions

View File

@@ -65,6 +65,7 @@ jobs:
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
upload-daemon:
needs: test-version
@@ -73,6 +74,7 @@ jobs:
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
push-image:
runs-on: ubuntu-latest
@@ -132,6 +134,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -156,6 +159,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
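
These `repository` inputs thread the PR's head repository down to the reusable workflows, so CI can check out code from a fork instead of assuming the base repository (see "ci: add a parameter of the code repository" and "fix(ci): specify repo when checkout code for PR" in the commit list). As a rough sketch of what actions/checkout does with that input, with a hypothetical fork and branch name:

# Hypothetical fork PR: the head repo and branch come from the PR event.
# Without the `repository` input, checkout would look for the branch in
# the base repo and fail for fork PRs.
git clone --depth 1 --branch my-feature https://github.com/contributor/Olares.git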

View File

@@ -11,27 +11,13 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -42,28 +28,12 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64

View File

@@ -11,22 +11,6 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -42,23 +26,6 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

View File

@@ -8,7 +8,17 @@ on:
required: true
ref:
type: string
repository:
type: string
workflow_dispatch:
inputs:
version:
type: string
required: true
ref:
type: string
repository:
type: string
jobs:
goreleaser:
runs-on: ubuntu-22.04
@@ -18,6 +28,7 @@ jobs:
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
repository: ${{ inputs.repository }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
@@ -51,6 +62,5 @@ jobs:
AWS_DEFAULT_REGION: "us-east-1"
run: |
cd cli/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
# coscmd upload $file /$file
aws s3 cp "$file" s3://terminus-os-install${{ secrets.REPO_PATH }}${file} --acl=public-read
done
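
Note how `secrets.REPO_PATH` is spliced directly between the bucket name and the file name with no added separators, so the secret presumably carries its own leading and trailing slashes (`/` for the root layout, or something like `/some-vendor/` for a vendor repo). A minimal sketch of the composition, with an assumed REPO_PATH value:

# Assumed value; the real secret is not shown in this diff.
REPO_PATH=/some-vendor/
file=olares-cli-v1.12.0.tar.gz          # illustrative file name
echo "s3://terminus-os-install${REPO_PATH}${file}"
# -> s3://terminus-os-install/some-vendor/olares-cli-v1.12.0.tar.gz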

View File

@@ -8,7 +8,17 @@ on:
required: true
ref:
type: string
repository:
type: string
workflow_dispatch:
inputs:
version:
type: string
required: true
ref:
type: string
repository:
type: string
jobs:
goreleaser:
@@ -19,6 +29,7 @@ jobs:
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
repository: ${{ inputs.repository }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
@@ -54,5 +65,5 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
cd daemon/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
aws s3 cp "$file" s3://terminus-os-install${{ secrets.REPO_PATH }}${file} --acl=public-read
done

View File

@@ -77,6 +77,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -94,6 +95,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
@@ -121,8 +123,8 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz > install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz --acl=public-read && \
echo "md5sum=$(awk '{print $1}' install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt)" >> "$GITHUB_OUTPUT"

View File

@@ -80,8 +80,8 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ github.event.inputs.tags }}.tar.gz > install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.tar.gz s3://terminus-os-install/install-wizard-v${{ github.event.inputs.tags }}.tar.gz --acl=public-read
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.tar.gz s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.tar.gz --acl=public-read
release:
runs-on: ubuntu-latest
@@ -101,7 +101,7 @@ jobs:
- name: Get checksum
id: vars
run: |
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
- name: Update checksum
uses: eball/write-tag-to-version-file@latest
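
For reference, the published checksum file can be used to verify a downloaded install wizard. A minimal sketch, assuming an illustrative version and a REPO_PATH of `/` (the pre-change layout):

V=1.12.0   # illustrative version
curl -sSfLO "https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${V}.tar.gz"
curl -sSfLO "https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${V}.md5sum.txt"
# The .md5sum.txt line has the form "<hash>  install-wizard-v${V}.tar.gz",
# so md5sum -c can check the tarball in the same directory.
md5sum -c "install-wizard-v${V}.md5sum.txt"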

.gitignore (vendored): 1 change
View File

@@ -31,3 +31,4 @@ olares-cli-*.tar.gz
.DS_Store
cli/output
daemon/output
daemon/bin

View File

@@ -108,20 +108,15 @@ Olares has been tested and verified on the following Linux platforms:
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.com/manual/get-started/) for step-by-step instructions.
## Project navigation
> [!NOTE]
> We are currently consolidating Olares subproject code into this repository. This process may take a few months. Once finished, you will get a comprehensive view of the entire Olares system here.
This section lists the main directories in the Olares repository:
* **`apps`**: Contains the code for system applications, primarily for `larepass`.
* **`cli`**: Contains the code for `olares-cli`, the command-line interface tool for Olares.
* **`daemon`**: Contains the code for `olaresd`, the system daemon process.
* **[`apps`](./apps)**: Contains the code for system applications, primarily for `larepass`.
* **[`cli`](./cli)**: Contains the code for `olares-cli`, the command-line interface tool for Olares.
* **[`daemon`](./daemon)**: Contains the code for `olaresd`, the system daemon process.
* **`docs`**: Contains documentation for the project.
* **`framework`**: Contains the Olares system services.
* **`infrastructure`**: Contains code related to infrastructure components such as computing, storage, networking, and GPUs.
* **`platform`**: Contains code for cloud-native components like databases and message queues.
* **[`framework`](./framework)**: Contains the Olares system services.
* **[`infrastructure`](./infrastructure)**: Contains code related to infrastructure components such as computing, storage, networking, and GPUs.
* **[`platform`](./platform)**: Contains code for cloud-native components like databases and message queues.
* **`vendor`**: Contains code from third-party hardware vendors.
## Contributing to Olares

View File

@@ -110,19 +110,15 @@ Olares 已在以下 Linux 平台完成测试与验证:
参考[快速上手指南](https://docs.olares.cn/zh/manual/get-started/)安装并激活 Olares。
## 项目目录
> [!NOTE]
> 我们正将 Olares 子项目的代码移动到当前仓库。此过程可能会持续数月。届时您就可以通过本仓库了解 Olares 系统的全貌。
Olares 代码库中的主要目录如下:
* **`apps`**: 用于存放系统应用,主要是 `larepass` 的代码。
* **`cli`**: 用于存放 `olares-cli`(Olares 的命令行界面工具)的代码。
* **`daemon`**: 用于存放 `olaresd`(系统守护进程)的代码。
* **[`apps`](./apps)**: 用于存放系统应用,主要是 `larepass` 的代码。
* **[`cli`](./cli)**: 用于存放 `olares-cli`(Olares 的命令行界面工具)的代码。
* **[`daemon`](./daemon)**: 用于存放 `olaresd`(系统守护进程)的代码。
* **`docs`**: 用于存放 Olares 项目的文档。
* **`framework`**: 用来存放 Olares 系统服务代码。
* **`infrastructure`**: 用于存放计算、存储、网络、GPU 等基础设施的代码。
* **`platform`**: 用于存放数据库、消息队列等云原生组件的代码。
* **[`framework`](./framework)**: 用来存放 Olares 系统服务代码。
* **[`infrastructure`](./infrastructure)**: 用于存放计算、存储、网络、GPU 等基础设施的代码。
* **[`platform`](./platform)**: 用于存放数据库、消息队列等云原生组件的代码。
* **`vendor`**: 用于存放来自第三方硬件供应商的代码。
## 社区贡献

View File

@@ -110,18 +110,15 @@ Olaresは以下のLinuxプラットフォームで動作検証を完了してい
## プロジェクトナビゲーション
> [!NOTE]
> 現在、Olaresのサブプロジェクトのコードを当リポジトリへ移行する作業を進めています。この作業が完了するまでには数ヶ月を要する見込みです。完了後には、当リポジトリを通じてOlaresシステムの全貌をご覧いただけるようになります。
このセクションでは、Olares リポジトリ内の主要なディレクトリをリストアップしています:
* **`apps`**: システムアプリケーションのコードが含まれており、主に `larepass` 用です。
* **`cli`**: Olares のコマンドラインインターフェースツールである `olares-cli` のコードが含まれています。
* **`daemon`**: システムデーモンプロセスである `olaresd` のコードが含まれています。
* **[`apps`](./apps)**: システムアプリケーションのコードが含まれており、主に `larepass` 用です。
* **[`cli`](./cli)**: Olares のコマンドラインインターフェースツールである `olares-cli` のコードが含まれています。
* **[`daemon`](./daemon)**: システムデーモンプロセスである `olaresd` のコードが含まれています。
* **`docs`**: プロジェクトのドキュメントが含まれています。
* **`framework`**: Olares システムサービスが含まれています。
* **`infrastructure`**: コンピューティング、ストレージ、ネットワーキング、GPU などのインフラストラクチャコンポーネントに関連するコードが含まれています。
* **`platform`**: データベースやメッセージキューなどのクラウドネイティブコンポーネントのコードが含まれています。
* **[`framework`](./framework)**: Olares システムサービスが含まれています。
* **[`infrastructure`](./infrastructure)**: コンピューティング、ストレージ、ネットワーキング、GPU などのインフラストラクチャコンポーネントに関連するコードが含まれています。
* **[`platform`](./platform)**: データベースやメッセージキューなどのクラウドネイティブコンポーネントのコードが含まれています。
* **`vendor`**: サードパーティのハードウェアベンダーからのコードが含まれています。
## Olaresへの貢献

View File

@@ -1,26 +0,0 @@
apiVersion: v2
name: appstore
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "appstore.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "appstore.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "appstore.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "appstore.labels" -}}
helm.sh/chart: {{ include "appstore.chart" . }}
{{ include "appstore.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "appstore.selectorLabels" -}}
app.kubernetes.io/name: {{ include "appstore.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "appstore.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "appstore.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -1,353 +0,0 @@
{{- $market_secret := (lookup "v1" "Secret" .Release.Namespace "market-secrets") -}}
{{- $redis_password := "" -}}
{{ if $market_secret -}}
{{ $redis_password = (index $market_secret "data" "redis-passwords") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $market_backend_nats_secret := (lookup "v1" "Secret" .Release.Namespace "market-backend-nats-secret") -}}
{{- $nats_password := "" -}}
{{ if $market_backend_nats_secret -}}
{{ $nats_password = (index $market_backend_nats_secret "data" "nats_password") }}
{{ else -}}
{{ $nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: market-backend-nats-secret
namespace: {{ .Release.Namespace }}
type: Opaque
data:
nats_password: {{ $nats_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: market-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
redis-passwords: {{ $redis_password }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: market-deployment
namespace: {{ .Release.Namespace }}
labels:
app: appstore
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
selector:
matchLabels:
app: appstore
template:
metadata:
labels:
app: appstore
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "appstore-backend"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/opt/app/market"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
- authelia-backend.os-system:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: appstore-backend
image: beclab/market-backend:v0.3.12
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.appstore.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.appstore.appKey }}
- name: APP_SOTRE_SERVICE_SERVICE_HOST
value: appstore-server-prod.bttcdn.com
- name: MARKET_PROVIDER
value: '{{ .Values.os.appstore.marketProvider }}'
- name: APP_SOTRE_SERVICE_SERVICE_PORT
value: '443'
- name: APP_SERVICE_SERVICE_HOST
value: app-service.os-system
- name: APP_SERVICE_SERVICE_PORT
value: '6755'
- name: REPO_URL_PORT
value: "82"
- name: REDIS_ADDRESS
value: 'redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379'
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: market-secrets
key: redis-passwords
- name: REDIS_DB_NUMBER
value: '0'
- name: REPO_URL_HOST
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: market-backend-{{ .Values.bfl.username}}
- name: NATS_PASSWORD
valueFrom:
secretKeyRef:
name: market-backend-nats-secret
key: nats_password
- name: NATS_SUBJECT_USER_APPLICATION
value: terminus.user.application.{{ .Values.bfl.username}}
volumeMounts:
- name: opt-data
mountPath: /opt/app/data
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.5'
command:
- /ws-gateway
env:
- name: WS_PORT
value: '81'
- name: WS_URL
value: /app-store/v1/websocket/message
resources: { }
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
volumes:
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
- name: opt-data
hostPath:
path: '{{ .Values.userspace.appData}}/appstore/data'
type: DirectoryOrCreate
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: appstore-service
namespace: {{ .Release.Namespace }}
spec:
selector:
app: appstore
type: ClusterIP
ports:
- protocol: TCP
name: appstore-backend
port: 81
targetPort: 81
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: appstore
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: appstore
appid: appstore
key: {{ .Values.os.appstore.appKey }}
secret: {{ .Values.os.appstore.appSecret }}
permissions:
- dataType: event
group: message-disptahcer.system-server
ops:
- Create
version: v1
- dataType: app
group: service.bfl
ops:
- UserApps
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: appstore-backend-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: app
deployment: market
description: app store provider
endpoint: appstore-service.{{ .Release.Namespace }}:81
group: service.appstore
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: InstallDevApp
uri: /app-store/v1/applications/provider/installdev
- name: UninstallDevApp
uri: /app-store/v1/applications/provider/uninstalldev
version: v1
status:
state: active
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-redis
namespace: {{ .Release.Namespace }}
spec:
app: market
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis-passwords
name: market-secrets
namespace: market
---
apiVersion: v1
kind: Service
metadata:
name: appstore-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: appstore
ports:
- name: "appstore-backend"
protocol: TCP
port: 81
targetPort: 81
- name: "appstore-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-backend-nats
namespace: {{ .Release.Namespace }}
spec:
app: market-backend
appNamespace: user
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nats_password
name: market-backend-nats-secret
refs:
- appName: user-service
appNamespace: user
subjects:
- name: "application.*"
perm:
- pub
- sub
- appName: user-service
appNamespace: user
subjects:
- name: "market.*"
perm:
- pub
- sub
user: market-backend-{{ .Values.bfl.username}}
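
The deleted template above uses Helm's `lookup` function so that generated credentials survive upgrades: if the Secret already exists, its value is reused; otherwise a fresh random one is generated. A minimal shell analogue of that reuse-or-generate pattern (namespace and names here are illustrative, not taken from the chart):

NS=demo-namespace
SECRET=market-secrets
# Reuse the existing value if the Secret is already present...
existing=$(kubectl -n "$NS" get secret "$SECRET" \
  -o go-template='{{index .data "redis-passwords"}}' 2>/dev/null)
if [ -n "$existing" ]; then
  redis_password=$existing                             # already base64-encoded
else
  redis_password=$(head -c 12 /dev/urandom | base64)   # ...else generate one
fi
kubectl -n "$NS" create secret generic "$SECRET" \
  --from-literal=redis-passwords="$(printf '%s' "$redis_password" | base64 -d)" \
  --dry-run=client -o yaml | kubectl apply -f -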

View File

@@ -1,44 +0,0 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
nodeport_ingress_https: 30082
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
docs:
nodeport: 30881
desktop:
nodeport: 30180
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
appstore:
marketProvider: ''
kubesphere:
redis_password: ""

View File

@@ -1,294 +1,13 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $studio_secret := (lookup "v1" "Secret" $namespace "studio-secrets") -}}
{{- $pg_password := "" -}}
{{ if $studio_secret -}}
{{ $pg_password = (index $studio_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: studio-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: studio-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: studio
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: studio_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: studio-secrets
databases:
- name: studio
---
apiVersion: v1
kind: Service
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
namespace: user-space-{{ .Values.bfl.username }}
spec:
selector:
app: studio-server
type: ExternalName
externalName: studio-server.os-framework.svc.cluster.local
ports:
- protocol: TCP
name: studio-server
port: 8080
targetPort: 8088
name: http
- protocol: TCP
port: 8083
targetPort: 8083
name: https
---
kind: Service
apiVersion: v1
metadata:
name: chartmuseum-studio
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8888
selector:
app: studio-server
---
apiVersion: v1
kind: ConfigMap
metadata:
name: studio-san-cnf
namespace: {{ .Release.Namespace }}
data:
san.cnf: |
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = CN
stateOrProvinceName = Beijing
localityName = Beijing
0.organizationName = bytetrade
commonName = studio-server.{{ .Release.Namespace }}.svc
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @bytetrade
[bytetrade]
DNS.1 = studio-server.{{ .Release.Namespace }}.svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
labels:
app: studio-server
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: studio-server
template:
metadata:
labels:
app: studio-server
spec:
serviceAccountName: bytetrade-controller
volumes:
- name: chart
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData}}/studio/Chart'
- name: data
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData }}/studio/Data'
- name: storage-volume
hostPath:
path: '{{ .Values.userspace.appData }}/studio/helm-repo-dev'
type: DirectoryOrCreate
- name: config-san
configMap:
name: studio-san-cnf
items:
- key: san.cnf
path: san.cnf
- name: certs
emptyDir: {}
initContainers:
- name: init-chmod-data
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- sh
- '-c'
- |
chown -R 1000:1000 /home/coder
chown -R 65532:65532 /charts
chown -R 65532:65532 /data
securityContext:
runAsUser: 0
resources: { }
volumeMounts:
- name: storage-volume
mountPath: /home/coder
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- name: generate-certs
image: beclab/openssl:v3
imagePullPolicy: IfNotPresent
command: [ "/bin/sh", "-c" ]
args:
- |
openssl genrsa -out /etc/certs/ca.key 2048
openssl req -new -x509 -days 3650 -key /etc/certs/ca.key -out /etc/certs/ca.crt \
-subj "/CN=bytetrade CA/O=bytetrade/C=CN"
openssl req -new -newkey rsa:2048 -nodes \
-keyout /etc/certs/server.key -out /etc/certs/server.csr \
-config /etc/san/san.cnf
openssl x509 -req -days 3650 -in /etc/certs/server.csr \
-CA /etc/certs/ca.crt -CAkey /etc/certs/ca.key \
-CAcreateserial -out /etc/certs/server.crt \
-extensions v3_req -extfile /etc/san/san.cnf
chown -R 65532 /etc/certs/*
volumeMounts:
- name: config-san
mountPath: /etc/san
- name: certs
mountPath: /etc/certs
containers:
- name: studio
image: beclab/studio-server:v0.1.50
imagePullPolicy: IfNotPresent
args:
- server
ports:
- name: port
containerPort: 8088
protocol: TCP
- name: ssl-port
containerPort: 8083
protocol: TCP
volumeMounts:
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- mountPath: /etc/certs
name: certs
lifecycle:
preStop:
exec:
command:
- "/studio"
- "clean"
env:
- name: BASE_DIR
value: /charts
- name: OS_API_KEY
value: {{ .Values.os.studio.appKey }}
- name: OS_API_SECRET
value: {{ .Values.os.studio.appSecret }}
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: NAME_SPACE
value: {{ .Release.Namespace }}
- name: OWNER
value: '{{ .Values.bfl.username }}'
- name: DB_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: DB_USERNAME
value: studio_{{ .Values.bfl.username }}
- name: DB_PASSWORD
value: "{{ $pg_password | b64dec }}"
- name: DB_NAME
value: user_space_{{ .Values.bfl.username }}_studio
- name: DB_PORT
value: "5432"
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: "0.5"
memory: 1000Mi
- name: chartmuseum
image: aboveos/helm-chartmuseum:v0.15.0
args:
- '--port=8888'
- '--storage-local-rootdir=/storage'
ports:
- name: http
containerPort: 8888
protocol: TCP
env:
- name: CHART_POST_FORM_FIELD_NAME
value: chart
- name: DISABLE_API
value: 'false'
- name: LOG_JSON
value: 'true'
- name: PROV_POST_FORM_FIELD_NAME
value: prov
- name: STORAGE
value: local
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: 1000m
memory: 512Mi
volumeMounts:
- name: storage-volume
mountPath: /storage
livenessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
targetPort: 8080
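
The `generate-certs` init container above mints a throwaway CA and a server certificate whose SAN comes from the mounted `san.cnf`. Output like this can be sanity-checked with stock openssl commands, e.g. against the container's `/etc/certs` paths:

# Confirm the server certificate chains to the generated CA
openssl verify -CAfile /etc/certs/ca.crt /etc/certs/server.crt
# Inspect the SAN to confirm it matches the studio-server service DNS name
openssl x509 -in /etc/certs/server.crt -noout -text | grep -A1 'Subject Alternative Name'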

View File

@@ -22,42 +22,10 @@ spec:
initContainers:
- args:
- -it
- authelia-backend.os-system:9091
- authelia-backend.os-framework:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
# - name: terminus-sidecar-init
# image: openservicemesh/init:v1.2.3
# imagePullPolicy: IfNotPresent
# securityContext:
# privileged: true
# capabilities:
# add:
# - NET_ADMIN
# runAsNonRoot: false
# runAsUser: 0
# command:
# - /bin/sh
# - -c
# - |
# iptables-restore --noflush <<EOF
# # sidecar interception rules
# *nat
# :PROXY_IN_REDIRECT - [0:0]
# :PROXY_INBOUND - [0:0]
# -A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
# -A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
# -A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
# -A PREROUTING -p tcp -j PROXY_INBOUND
# COMMIT
# EOF
# env:
# - name: POD_IP
# valueFrom:
# fieldRef:
# apiVersion: v1
# fieldPath: status.podIP
containers:
- name: wizard
@@ -68,77 +36,11 @@ spec:
env:
- name: apiServerURL
value: http://bfl.{{ .Release.Namespace }}:8080
# - name: wizard-server
# image: aboveos/wizard-server:v0.4.2
# imagePullPolicy: IfNotPresent
# volumeMounts:
# - name: userspace-dir
# mountPath: /Home
# ports:
# - containerPort: 3000
# env:
# - name: OS_SYSTEM_SERVER
# value: system-server.user-system-{{ .Values.bfl.username }}
# - name: OS_APP_SECRET
# value: '{{ .Values.os.desktop.appSecret }}'
# - name: OS_APP_KEY
# value: {{ .Values.os.desktop.appKey }}
# - name: APP_SERVICE_SERVICE_HOST
# value: app-service.os-system
# - name: APP_SERVICE_SERVICE_PORT
# value: '6755'
# - name: terminus-envoy-sidecar
# image: bytetrade/envoy:v1.25.11
# imagePullPolicy: IfNotPresent
# securityContext:
# allowPrivilegeEscalation: false
# runAsUser: 1000
# ports:
# - name: proxy-admin
# containerPort: 15000
# - name: proxy-inbound
# containerPort: 15003
# volumeMounts:
# - name: terminus-sidecar-config
# readOnly: true
# mountPath: /etc/envoy/envoy.yaml
# subPath: envoy.yaml
# command:
# - /usr/local/bin/envoy
# - --log-level
# - debug
# - -c
# - /etc/envoy/envoy.yaml
# env:
# - name: POD_UID
# valueFrom:
# fieldRef:
# fieldPath: metadata.uid
# - name: POD_NAME
# valueFrom:
# fieldRef:
# fieldPath: metadata.name
# - name: POD_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
# - name: POD_IP
# valueFrom:
# fieldRef:
# fieldPath: status.podIP
volumes:
- name: userspace-dir
hostPath:
type: Directory
path: "{{ .Values.userspace.userData }}"
# - name: terminus-sidecar-config
# configMap:
# name: sidecar-configs
# items:
# - key: envoy.yaml
# path: envoy.yaml
---
apiVersion: v1

View File

@@ -0,0 +1,20 @@
# Olares Apps
## Overview
This directory contains the code for system applications, primarily for LarePass. The pre-installed system applications listed below provide tools for managing files, knowledge, passwords, and the system itself.
## System Applications Overview
| Application | Description |
| --- | --- |
| Files | A file management app that manages and synchronizes files across devices and sources, enabling seamless sharing and access. |
| Wise | A local-first and AI-native modern reader that helps to collect, read, and manage information from various platforms. Users can run self-hosted recommendation algorithms to filter and sort online content. |
| Vault | A secure password manager for storing and managing sensitive information across devices. |
| Market | A decentralized and permissionless app store for installing, uninstalling, and updating applications and recommendation algorithms. |
| Desktop | A hub for managing and interacting with installed applications. File and application searching are also supported. |
| Profile | An app to customize the user's profile page. |
| Settings | A system configuration application. |
| Dashboard | An app for monitoring system resource usage. |
| Control Hub | The console for Olares, providing precise and autonomous control over the system and its environment. |
| DevBox | A development tool for building and deploying Olares applications. |

View File

@@ -1003,7 +1003,7 @@ _get_sts_bfl() {
_get_deployment_backup_server() {
local res
res=$($sh_c "${KUBECTL} -n os-system get deployment backup-server 2>/dev/null")
res=$($sh_c "${KUBECTL} -n os-framework get deployment backup 2>/dev/null")
if [ "$?" -ne 0 ]; then
echo 0
fi

View File

@@ -30,7 +30,7 @@ repaire_crd_terminus() {
if [ ! -z "${AWS_SESSION_TOKEN_SETUP}" ]; then
patch='[{"op":"add","path":"/metadata/annotations/bytetrade.io~1s3-sts","value":"'"$AWS_SESSION_TOKEN_SETUP"'"},{"op":"add","path":"/metadata/annotations/bytetrade.io~1s3-ak","value":"'"$AWS_ACCESS_KEY_ID_SETUP"'"},{"op":"add","path":"/metadata/annotations/bytetrade.io~1s3-sk","value":"'"$AWS_SECRET_ACCESS_KEY_SETUP"'"},{"op":"add","path":"/metadata/annotations/bytetrade.io~1cluster-id","value":"'"$CLUSTER_ID"'"}]'
$sh_c "${KUBECTL} patch terminus.sys.bytetrade.io terminus -n os-system --type='json' -p='$patch'"
$sh_c "${KUBECTL} patch terminus.sys.bytetrade.io terminus --type='json' -p='$patch'"
fi
}
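
In the patch above, `~1` is the JSON Pointer escape for `/` (and `~0` for `~`), so `bytetrade.io~1s3-sts` addresses the annotation key `bytetrade.io/s3-sts`. A self-contained example of the same escaping against a hypothetical resource:

# `example.com~1owner` targets the annotation key "example.com/owner".
# Note: with a JSON patch, /metadata/annotations must already exist for
# this add to succeed.
kubectl patch configmap demo --type=json \
  -p='[{"op":"add","path":"/metadata/annotations/example.com~1owner","value":"alice"}]'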

View File

@@ -1,616 +0,0 @@
#!/usr/bin/env bash
# Upgrading will be executed in app-service container based on kubesphere/kubectl:v1.22.9
# By default, the tool packages will be installed via apt during the docker build
# env:
# BASE_DIR
function command_exists() {
command -v "$@" > /dev/null 2>&1
}
function get_shell_exec(){
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if command_exists sudo && command_exists su; then
sh_c='sudo su -c'
else
cat >&2 <<-'EOF'
Error: this installer needs the ability to run commands as root.
We are unable to find either "sudo" or "su" available to make this happen.
EOF
exit 1
fi
fi
}
function get_bfl_api_port(){
local username=$1
$sh_c "${KUBECTL} get svc bfl -n user-space-${username} -o jsonpath='{.spec.ports[0].nodePort}'"
}
# function get_docs_port(){
# local username=$1
# $sh_c "${KUBECTL} get svc swagger-ui -n user-space-${username} -o jsonpath='{.spec.ports[0].nodePort}'"
# }
function get_desktop_port(){
local username=$1
$sh_c "${KUBECTL} get svc edge-desktop -n user-space-${username} -o jsonpath='{.spec.ports[0].nodePort}'"
}
function get_user_password(){
local username=$1
$sh_c "${KUBECTL} get user ${username} -o jsonpath='{.spec.password}'"
}
function get_user_email(){
local username=$1
$sh_c "${KUBECTL} get user ${username} -o jsonpath='{.spec.email}'"
}
function ensure_success() {
"$@"
local ret=$?
if [ $ret -ne 0 ]; then
echo "Fatal error, command: '$@'"
exit $ret
fi
return $ret
}
function validate_user(){
local username=$1
$sh_c "${KUBECTL} get ns user-space-${username} > /dev/null"
local ret=$?
if [ $ret -ne 0 ]; then
echo "no"
else
echo "yes"
fi
}
function get_bfl_node(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'tier=bfl' -o jsonpath='{.items[*].spec.nodeName}'"
}
function get_bfl_url() {
local username=$1
local user_bfl_port=$(get_bfl_api_port ${username})
bfl_ip=$(curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+")
echo "http://$bfl_ip:${user_bfl_port}/bfl/apidocs.json"
}
function get_userspace_dir(){
local username=$1
local space_dir=$2
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'tier=bfl' -o \
jsonpath='{range .items[0].spec.volumes[*]}{.name}{\" \"}{.persistentVolumeClaim.claimName}{\"\\n\"}{end}'" | \
while read pvc; do
local pvc_data=($pvc)
if [ ${#pvc_data[@]} -gt 1 ]; then
if [ "x${pvc_data[0]}" == "x${space_dir}" ]; then
local USERSPACE_PVC="${pvc_data[1]}"
local pv=$($sh_c "${KUBECTL} get pvc -n user-space-${username} ${pvc_data[1]} -o jsonpath='{.spec.volumeName}'")
local pv_path=$($sh_c "${KUBECTL} get pv ${pv} -o jsonpath='{.spec.hostPath.path}'")
local USERSPACE_PV_PATH="${pv_path}"
echo "${USERSPACE_PVC} ${USERSPACE_PV_PATH} ${pv}"
break
fi
fi
done
}
function get_bfl_rand16(){
local username=$1
local prefix=$2
$sh_c "${KUBECTL} get sts -n user-space-${username} bfl -o jsonpath='{.metadata.annotations.${prefix}_rand16}'"
}
function gen_app_key_secret(){
local app=$1
local key="bytetrade_${app}_${RANDOM}"
local t=$(date +%s)
local secret=$(echo -n "${key}|${t}"|md5sum|cut -d" " -f1)
echo "${key} ${secret:0:16}"
}
function get_app_key_secret(){
local username=$1
local app=$2
local ks=$($sh_c "${KUBECTL} get appperm ${app} -n user-system-${username} -o jsonpath='{.spec.key} {.spec.secret}'")
if [ "x${ks}" == "x" ]; then
ks=$(gen_app_key_secret "${app}")
fi
echo "${ks}"
}
function get_app_settings(){
local username=$1
local apps=("vault" "desktop" "message" "wise" "search" "appstore" "notification" "dashboard" "settings" "studio" "profile" "agent" "files")
for a in ${apps[@]};do
ks=($(get_app_key_secret "$username" "$a"))
echo '
'${a}':
appKey: '${ks[0]}'
appSecret: "'${ks[1]}'"
'
done
}
function gen_bfl_values(){
local username=$1
local user_bfl_port=$(get_bfl_api_port ${username})
echo "Try to find the current bfl pv ..."
local pvc_path=($(get_userspace_dir ${username} "userspace-dir"))
local appcache_pvc_path=($(get_userspace_dir ${username} "appcache-dir"))
local dbdata_pvc_path=($(get_userspace_dir ${username} "dbdata-dir"))
local userspace_rand16=$(get_bfl_rand16 ${username} "userspace")
local appcache_rand16=$(get_bfl_rand16 ${username} "Cache")
local dbdata_rand16=$(get_bfl_rand16 ${username} "dbdata")
echo '
bfl:
nodeport: '${user_bfl_port}'
username: '${username}'
userspace_rand16: '${userspace_rand16}'
userspace_pv: '${pvc_path[2]}'
userspace_pvc: '${pvc_path[0]}'
appcache_rand16: '${appcache_rand16}'
appcache_pv: '${appcache_pvc_path[2]}'
appcache_pvc: '${appcache_pvc_path[0]}'
dbdata_rand16: '${dbdata_rand16}'
dbdata_pv: '${dbdata_pvc_path[2]}'
dbdata_pvc: '${dbdata_pvc_path[0]}'
' > ${BASE_DIR}/wizard/config/launcher/values.yaml
}
function gen_settings_values(){
local username=$1
# local userpwd="$(get_user_password ${username})"
# local useremail="$(get_user_email ${username})"
echo '
namespace:
name: user-space-'${username}'
role: admin
user:
name: '${username}'
' > ${BASE_DIR}/wizard/config/settings/values.yaml
}
function gen_app_values(){
local username=$1
local bfl_node=$(get_bfl_node ${username})
local bfl_doc_url=$(get_bfl_url ${username})
local desktop_ports=$(get_desktop_port ${username})
# local docs_ports=$(get_docs_port ${username})
echo "Try to find pv ..."
local pvc_path=($(get_userspace_dir ${username} "userspace-dir"))
local appcache_pvc_path=($(get_userspace_dir ${username} "appcache-dir"))
local dbdata_pvc_path=($(get_userspace_dir ${username} "dbdata-dir"))
local app_perm_settings=$(get_app_settings ${username})
cat ${BASE_DIR}/wizard/config/launcher/values.yaml > ${BASE_DIR}/wizard/config/apps/values.yaml
cat << EOF >> ${BASE_DIR}/wizard/config/apps/values.yaml
url: '${bfl_doc_url}'
nodeName: ${bfl_node}
pvc:
userspace: ${pvc_path[0]}
userspace:
appCache: ${appcache_pvc_path[1]}
dbdata: ${dbdata_pvc_path[1]}
userData: ${pvc_path[1]}/Home
appData: ${pvc_path[1]}/Data
desktop:
nodeport: ${desktop_ports}
os:
${app_perm_settings}
EOF
}
function close_apps(){
local username=$1
local app_list=(
"vault-deployment"
)
for app in ${app_list[@]} ; do
$sh_c "${KUBECTL} scale deployment ${app} -n user-space-${username} --replicas=0"
done
}
repeat(){
for i in $(seq 1 $1); do
echo -n $2
done
}
function get_appservice_pod(){
$sh_c "${KUBECTL} get pod -n os-system -l 'tier=app-service' -o jsonpath='{.items[*].metadata.name}'"
}
function get_appservice_status(){
$sh_c "${KUBECTL} get pod -n os-system -l 'tier=app-service' -o jsonpath='{.items[*].status.phase}'"
}
function get_desktop_status(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'app=edge-desktop' -o jsonpath='{.items[*].status.phase}'"
}
function get_vault_status(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'app=vault' -o jsonpath='{.items[*].status.phase}'"
}
function get_bfl_status(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'tier=bfl' -o jsonpath='{.items[*].status.phase}'"
}
function get_fileserver_status(){
$sh_c "${KUBECTL} get pod -n os-system -l 'app=files' -o jsonpath='{.items[*].status.phase}'"
}
function get_filefe_status(){
local username=$1
$sh_c "${KUBECTL} get pod -n user-space-${username} -l 'app=files' -o jsonpath='{.items[*].status.phase}'"
}
function check_fileserver(){
local status=$(get_fileserver_status)
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rWaiting for file-server starting ${dot}"
sleep 0.5
status=$(get_fileserver_status)
echo -ne "\rWaiting for file-server starting "
done
echo
}
function check_appservice(){
local status=$(get_appservice_status)
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rWaiting for app-service starting ${dot}"
sleep 0.5
status=$(get_appservice_status)
echo -ne "\rWaiting for app-service starting "
done
echo
}
function check_filesfe(){
local username=$1
local status=$(get_filefe_status ${username})
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rPlease waiting ${dot}"
sleep 0.5
status=$(get_filefe_status ${username})
echo -ne "\rPlease waiting "
done
echo
}
function check_bfl(){
local username=$1
local status=$(get_bfl_status ${username})
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rPlease waiting ${dot}"
sleep 0.5
status=$(get_bfl_status ${username})
echo -ne "\rPlease waiting "
done
echo
}
function check_desktop(){
local username=$1
local status=$(get_desktop_status ${username})
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rPlease waiting ${dot}"
sleep 0.5
status=$(get_desktop_status ${username})
echo -ne "\rPlease waiting "
done
echo
}
function check_vault(){
local username=$1
local status=$(get_vault_status ${username})
local n=0
while [ "x${status}" != "xRunning" ]; do
n=$(expr $n + 1)
local dotn=$(($n % 10))
local dot=$(repeat $dotn '>')
echo -ne "\rPlease waiting ${dot}"
sleep 0.5
status=$(get_vault_status ${username})
echo -ne "\rPlease waiting "
done
echo
}
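# check_all: each argument is "<app-label>@<namespace>"; blocks until the matching pod reports phase Running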
function check_all(){
local pods=$@
for p in ${pods[@]}; do
local n=$(echo "${p}"|awk -F"@" '{print $1}')
local ns=$(echo "${p}"|awk -F"@" '{print $2}')
local s=$($sh_c "${KUBECTL} get pod -n ${ns} -l 'app=${n}' -o jsonpath='{.items[*].status.phase}'")
echo -ne "\rPlease wait: ${p}"
while [ "x${s}" != "xRunning" ];do
echo -ne "\rPlease wait: ${p}"
s=$($sh_c "${KUBECTL} get pod -n ${ns} -l 'app=${n}' -o jsonpath='{.items[*].status.phase}'")
done
echo
done
}
function upgrade_ksapi(){
local users=$@
local current_version="beclab/ks-apiserver:v3.3.0-ext-3"
local image=$($sh_c "${KUBECTL} get deploy ks-apiserver -n kubesphere-system -o jsonpath='{.spec.template.spec.containers[0].image}'")
if [ "x${image}" != "x${current_version}" ]; then
echo "upgrade ks-apiserver and restore token ..."
secret=$(echo -n "ks_redis_${RANDOM}"|md5sum|cut -d" " -f1)
$sh_c "${KUBECTL} -n kubesphere-system create secret generic redis-secret --from-literal=auth=${secret:0:12}"
local old_jwt=$($sh_c "${KUBECTL} get configmap kubesphere-config -n kubesphere-system -o jsonpath='{.data.kubesphere\.yaml}'|grep jwtSecret|awk -F':' '{print \$2}'")
sed -i -e "s/__jwtkey__/${old_jwt}/" ${BASE_DIR}/deploy/cm-kubesphere-config.yaml
$sh_c "${KUBECTL} apply -f ${BASE_DIR}/deploy/redis-deploy.yaml"
$sh_c "${KUBECTL} apply -f ${BASE_DIR}/deploy/cm-kubesphere-config.yaml"
check_all "redis@kubesphere-system"
$sh_c "${KUBECTL} -n kubesphere-system set image deployment/ks-apiserver ks-apiserver=beclab/ks-apiserver:v3.3.0-ext-3"
$sh_c "${KUBECTL} patch deploy ks-apiserver -n kubesphere-system --patch-file=${BASE_DIR}/deploy/ks-apiserver-patch.yaml"
check_all "ks-apiserver@kubesphere-system"
for username in ${users[@]}; do
$sh_c "${KUBECTL} rollout restart deploy authelia-backend -n user-system-${username}"
check_all "authelia-backend@user-system-${username}"
done
fi
}
function upgrade_jfs(){
local users=$@
local JFS_VERSION="11.1.1"
local current_jfs_version=$(/usr/local/bin/juicefs --version|awk '{print $3}'|awk -F'+' '{print $1}')
if [ "x${JFS_VERSION}" != "x${current_jfs_version}" ]; then
echo "upgrade JuiceFS ..."
local juicefs_bin="/usr/local/bin/juicefs"
ensure_success $sh_c "curl ${CURL_TRY} -kLO https://github.com/beclab/juicefs-ext/releases/download/v${JFS_VERSION}/juicefs-v${JFS_VERSION}-linux-amd64.tar.gz"
ensure_success $sh_c "tar -zxf juicefs-v${JFS_VERSION}-linux-amd64.tar.gz"
ensure_success $sh_c "chmod +x juicefs"
ensure_success $sh_c "systemctl stop juicefs"
ensure_success $sh_c "mv juicefs ${juicefs_bin}"
ensure_success $sh_c "rm -f /tmp/JuiceFS-IPC.sock"
ensure_success $sh_c "systemctl start juicefs"
echo "restart pods ... "
ensure_success $sh_c "${KUBECTL} rollout restart sts app-service -n os-system"
local tf=$(mktemp)
ensure_success $sh_c "${KUBECTL} get deployment -A -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.namespace} {.spec.template.spec.volumes}{\"\n\"}{end}' | grep '/olares/rootfs'" > $tf
while read dep; do
local depinfo=($dep)
ensure_success $sh_c "${KUBECTL} rollout restart deployment ${depinfo[0]} -n ${depinfo[1]}"
done < $tf
for user in ${users[@]}; do
ensure_success $sh_c "${KUBECTL} rollout restart sts bfl -n user-space-${user}"
done
sleep 10 # waiting for restarting to begin
fi
}
function upgrade_terminus(){
HELM=$(command -v helm)
KUBECTL=$(command -v kubectl)
# find sudo
get_shell_exec
# fetch user list
local users=()
local admin_user=""
local tf=$(mktemp)
ensure_success $sh_c "${KUBECTL} get user -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.annotations.bytetrade\.io\/owner-role}{\"\n\"}{end}'" > $tf
while read userdata; do
local userinfo=($userdata)
local valid=$(validate_user "${userinfo[0]}")
if [ "x-${valid}" == "x-yes" ]; then
if [ "x-${userinfo[1]}" == "x-platform-admin" ]; then
admin_user="${userinfo[0]}"
fi
i=${#users[@]}
users[$i]=${userinfo[0]}
fi
done < $tf
if [ "x${admin_user}" == "x" ]; then
echo "Admin user not found. Upgrading failed." >&2
exit -1
fi
# upgrade_jfs ${users[@]}
local selfhosted=$($sh_c "${KUBECTL} get terminus terminus -o jsonpath='{.spec.settings.selfhosted}'")
local domainname=$($sh_c "${KUBECTL} get terminus terminus -o jsonpath='{.spec.settings.domainName}'")
sed -i "s/#__DOMAIN_NAME__/${domainname}/" ${BASE_DIR}/wizard/config/settings/templates/terminus_cr.yaml
sed -i "s/#__SELFHOSTED__/${selfhosted}/" ${BASE_DIR}/wizard/config/settings/templates/terminus_cr.yaml
echo "Upgrading olares system components ... "
gen_settings_values ${admin_user}
ensure_success $sh_c "${HELM} upgrade -i settings ${BASE_DIR}/wizard/config/settings -n default --reuse-values"
# patch
ensure_success $sh_c "${KUBECTL} apply -f ${BASE_DIR}/deploy/patch-globalrole-workspace-manager.yaml"
# ensure_success $sh_c "$KUBECTL apply -f ${BASE_DIR}/deploy/patch-notification-manager.yaml"
# clear apps values.yaml
cat /dev/null > ${BASE_DIR}/wizard/config/apps/values.yaml
cat /dev/null > ${BASE_DIR}/wizard/config/launcher/values.yaml
local appservice_pod=$(get_appservice_pod)
local copy_charts=("launcher" "apps")
for cc in ${copy_charts[@]}; do
ensure_success $sh_c "${KUBECTL} cp ${BASE_DIR}/wizard/config/${cc} os-system/${appservice_pod}:/userapps"
done
local ks_redis_pwd=$($sh_c "${KUBECTL} get secret -n kubesphere-system redis-secret -o jsonpath='{.data.auth}' |base64 -d")
for user in ${users[@]}; do
echo "Upgrading user ${user} ... "
gen_bfl_values ${user}
# gen bfl app key and secret
bfl_ks=($(get_app_key_secret ${user} "bfl"))
# install launcher and init the PV
ensure_success $sh_c "${HELM} upgrade -i launcher-${user} ${BASE_DIR}/wizard/config/launcher -n user-space-${user} --set bfl.appKey=${bfl_ks[0]} --set bfl.appSecret=${bfl_ks[1]} -f ${BASE_DIR}/wizard/config/launcher/values.yaml --reuse-values"
gen_app_values ${user}
close_apps ${user}
for appdir in "${BASE_DIR}/wizard/config/apps"/*/; do
if [ -d "$appdir" ]; then
releasename=$(basename "$appdir")
# ignore wizard
# FIXME: an uninitialized user's wizard should be upgraded
if [ x"${releasename}" == x"wizard" ]; then
continue
fi
if [ "$user" != "$admin_user" ];then
releasename=${releasename}-${user}
fi
ensure_success $sh_c "${HELM} upgrade -i ${releasename} ${appdir} -n user-space-${user} --reuse-values --set kubesphere.redis_password=${ks_redis_pwd} -f ${BASE_DIR}/wizard/config/apps/values.yaml"
fi
done
done
# upgrade app-service last, to keep it online longer
local terminus_is_cloud_version=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.terminus-is-cloud-version}'")
local backup_cluster_bucket=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.backup-cluster-bucket}'")
local backup_key_prefix=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.backup-key-prefix}'")
local backup_secret=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.backup-secret}'")
local backup_server_data=$($sh_c "${KUBECTL} get cm -n os-system backup-config -o jsonpath='{.data.backup-server-data}'")
ensure_success $sh_c "${HELM} upgrade -i system ${BASE_DIR}/wizard/config/system -n os-system --reuse-values \
--set kubesphere.redis_password=${ks_redis_pwd} --set backup.bucket=\"${backup_cluster_bucket}\" \
--set backup.key_prefix=\"${backup_key_prefix}\" --set backup.is_cloud_version=\"${terminus_is_cloud_version}\" \
--set backup.sync_secret=\"${backup_secret}\""
echo 'Waiting for App-Service ...'
sleep 2 # wait for the controller to reconcile
check_appservice
echo
echo 'Waiting for Vault ...'
check_vault ${admin_user}
echo
echo 'Starting BFL ...'
check_bfl ${admin_user}
echo
echo 'Starting files ...'
check_fileserver
check_filesfe ${admin_user}
echo
echo 'Starting Desktop ...'
check_desktop ${admin_user}
echo
}
echo "Start to upgrade olares ... "
upgrade_terminus
echo -e "\e[91m Success to upgrade olares.\e[0m Open your new desktop in the browser and have fun !"

View File

@@ -6,7 +6,7 @@ metadata:
annotations:
iam.kubesphere.io/uninitialized: "true"
helm.sh/resource-policy: keep
bytetrade.io/owner-role: platform-admin
bytetrade.io/owner-role: owner
bytetrade.io/terminus-name: "{{.Values.user.terminus_name}}"
bytetrade.io/launcher-auth-policy: two_factor
bytetrade.io/launcher-access-level: "1"
@@ -23,4 +23,4 @@ spec:
groups:
- lldap_admin
status:
state: Active
state: Created

View File

@@ -5,7 +5,7 @@ metadata:
spec:
lldap:
name: ldap
url: "http://lldap-service.os-system:17170"
url: "http://lldap-service.os-platform:17170"
userBlacklist:
- admin
- terminus
@@ -15,4 +15,4 @@ spec:
credentialsSecret:
kind: Secret
name: lldap-credentials
namespace: os-system
namespace: os-platform

View File

@@ -60,3 +60,29 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "opentelemetry-operator.fullname" -}}
{{- "otel-opentelemetry-operator" }}
{{- end }}
{{- define "opentelemetry-operator.WebhookCert" -}}
{{- $caCertEnc := "" }}
{{- $certCrtEnc := "" }}
{{- $certKeyEnc := "" }}
{{- $prevSecret := (lookup "v1" "Secret" .Release.Namespace (printf "%s-controller-manager-service-cert" (include "opentelemetry-operator.fullname" .) )) }}
{{- if $prevSecret }}
{{- $certCrtEnc = index $prevSecret "data" "tls.crt" }}
{{- $certKeyEnc = index $prevSecret "data" "tls.key" }}
{{- $caCertEnc = index $prevSecret "data" "ca.crt" }}
{{- else }}
{{- $altNames := list ( printf "%s-webhook.%s" (include "opentelemetry-operator.fullname" .) .Release.Namespace ) ( printf "%s-webhook.%s.svc" (include "opentelemetry-operator.fullname" .) .Release.Namespace ) -}}
{{- $tmpperioddays := 3650 }}
{{- $ca := genCA "opentelemetry-operator-operator-ca" $tmpperioddays }}
{{- $cert := genSignedCert (include "opentelemetry-operator.fullname" .) nil $altNames $tmpperioddays $ca }}
{{- $certCrtEnc = b64enc $cert.Cert }}
{{- $certKeyEnc = b64enc $cert.Key }}
{{- $caCertEnc = b64enc $ca.Cert }}
{{- end }}
{{- $result := dict "crt" $certCrtEnc "key" $certKeyEnc "ca" $caCertEnc }}
{{- $result | toYaml }}
{{- end }}

View File

@@ -4,17 +4,31 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: os-system
namespace: os-platform
name: os-internal
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: os-framework
name: os-internal
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: os-network
name: os-network-internal
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: os-internal-rb
name: os-platform:os-internal-rb
subjects:
- kind: ServiceAccount
namespace: os-system
namespace: os-platform
name: os-internal
roleRef:
# kind: Role
@@ -22,6 +36,36 @@ roleRef:
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: os-framework:os-internal-rb
subjects:
- kind: ServiceAccount
namespace: os-framework
name: os-internal
roleRef:
# kind: Role
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: os-network:os-network-rb
subjects:
- kind: ServiceAccount
namespace: os-network
name: os-network-internal
roleRef:
# kind: Role
kind: ClusterRole
name: l4-proxy-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -194,4 +238,21 @@ rules:
- update
- patch
- delete
- deletecollection
- deletecollection
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: l4-proxy-role
rules:
- apiGroups:
- '*'
resources:
- users
- applications
verbs:
- get
- list
- watch

View File

@@ -1,5 +1,5 @@
---
apiVersion: v1
kind: Namespace
metadata:
@@ -7,4 +7,26 @@ metadata:
kubesphere.io/creator: '{{ .Values.user.name }}'
labels:
kubesphere.io/workspace: system-workspace
name: os-system
name: os-network
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
kubesphere.io/creator: '{{ .Values.user.name }}'
labels:
kubesphere.io/workspace: system-workspace
name: os-platform
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
kubesphere.io/creator: '{{ .Values.user.name }}'
labels:
kubesphere.io/workspace: system-workspace
name: os-framework

View File

@@ -24,6 +24,7 @@ cp ${BASE_DIR}/.dependencies/components ${BASE_DIR}/.manifest/.
cp ${BASE_DIR}/.dependencies/components ${BASE_DIR}/.manifest/.
pushd ${BASE_DIR}.manifest
bash ${BASE_DIR}/build-manifest.sh ${BASE_DIR}/../.manifest/installation.manifest
python3 ${BASE_DIR}/build-manifest.py ${BASE_DIR}/../.manifest/installation.manifest
popd

162
build/build-manifest.py Normal file
View File

@@ -0,0 +1,162 @@
#!/usr/bin/env python3
import argparse
import hashlib
import os
import requests
import sys
import json
CDN_URL = "https://dc3p1870nn3cj.cloudfront.net"
def download_checksum(name):
"""Downloads the checksum for a given name."""
url = f"{CDN_URL}/{name}.checksum.txt"
try:
response = requests.get(url)
response.raise_for_status()
return response.text.split()[0]
except requests.exceptions.RequestException as e:
print(f"Error getting checksum for {name} from {url}: {e}", file=sys.stderr)
sys.exit(1)
def get_image_manifest(name):
"""Downloads the image manifest for a given name."""
url = f"{CDN_URL}/{name}.manifest.json"
try:
response = requests.get(url)
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
print(f"Error getting manifest for {name} from {url}: {e}", file=sys.stderr)
sys.exit(1)
def main():
"""Main function."""
parser = argparse.ArgumentParser()
parser.add_argument("manifest_file", help="The manifest file to write to.")
args = parser.parse_args()
manifest_file = args.manifest_file
version = os.environ.get("VERSION", "")
repo_path = os.environ.get("REPO_PATH", "/")
manifest_amd64_data = {}
manifest_arm64_data = {}
# Process components
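# Each line of the "components" file: filename,path,deps,<unused>,fileid (at least 5 comma-separated fields)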
try:
with open("components", "r") as f:
for line in f:
line = line.strip()
if not line:
continue
# Replace version
if version:
line = line.replace("#__VERSION__", version)
# Replace repo path
if repo_path:
line = line.replace("#__REPO_PATH__", repo_path)
fields = line.split(",")
if len(fields) < 5:
print(f"Format error in components file: {line}", file=sys.stderr)
sys.exit(1)
filename, path, deps, _, fileid = fields[:5]
print(f"Downloading file checksum for {filename}")
name = hashlib.md5(filename.encode()).hexdigest()
url_amd64 = name
url_arm64 = f"arm64/{name}"
checksum_amd64 = download_checksum(url_amd64)
checksum_arm64 = download_checksum(url_arm64)
manifest_amd64_data[filename] = {
"type": "component",
"path": path,
"deps": deps,
"url_amd64": url_amd64,
"checksum_amd64": checksum_amd64,
"fileid": fileid
}
manifest_arm64_data[filename] = {
"type": "component",
"path": path,
"deps": deps,
"url_arm64": url_arm64,
"checksum_arm64": checksum_arm64,
"fileid": fileid
}
except FileNotFoundError:
print("Error: 'components' file not found.", file=sys.stderr)
sys.exit(1)
# Process images
path = "images"
for deps_file in ["images.mf"]:
try:
with open(deps_file, "r") as f:
for line in f:
line = line.strip()
if not line:
continue
print(f"Downloading file checksum for {line}")
name = hashlib.md5(line.encode()).hexdigest()
url_amd64 = f"{name}.tar.gz"
url_arm64 = f"arm64/{name}.tar.gz"
checksum_amd64 = download_checksum(name)
checksum_arm64 = download_checksum(f"arm64/{name}")
# Get the image manifest
image_manifest_amd64 = get_image_manifest(name)
image_manifest_arm64 = get_image_manifest(f"arm64/{name}")
filename = f"{name}.tar.gz"
manifest_amd64_data[filename] = {
"type": "image",
"path": path,
"deps": deps_file,
"url_amd64": url_amd64,
"checksum_amd64": checksum_amd64,
"fileid": line,
"manifest": image_manifest_amd64
}
manifest_arm64_data[filename] = {
"type": "image",
"path": path,
"deps": deps_file,
"url_arm64": url_arm64,
"checksum_arm64": checksum_arm64,
"fileid": line,
"manifest": image_manifest_arm64
}
except FileNotFoundError:
print(f"Warning: '{deps_file}' not found, skipping.", file=sys.stderr)
sys.exit(1)
# Write the manifest file
amd64_manifest_file = f"{manifest_file}.amd64"
with open(amd64_manifest_file, "w") as mf:
json.dump(manifest_amd64_data, mf, indent=2)
arm64_manifest_file = f"{manifest_file}.arm64"
with open(arm64_manifest_file, "w") as mf:
json.dump(manifest_arm64_data, mf, indent=2)
# TODO: compress the manifest files
if __name__ == "__main__":
main()

View File

@@ -46,6 +46,9 @@ while read line; do
done < components
sed -i "s/#__VERSION__/${VERSION}/g" $manifest_file
path="${REPO_PATH:-/}"
sed -i "s|#__REPO_PATH__|${path}|g" $manifest_file
path="images"
for deps in "images.mf"; do
while read line; do

View File

@@ -16,6 +16,7 @@ rm -rf ${BASE_DIR}/../.dependencies
set -e
pushd ${BASE_DIR}/../.manifest
bash ${BASE_DIR}/build-manifest.sh ${BASE_DIR}/../.manifest/installation.manifest
python3 ${BASE_DIR}/build-manifest.py ${BASE_DIR}/../.manifest/installation.manifest
popd
pushd $DIST_PATH

View File

@@ -77,3 +77,5 @@ find $BASE_DIR/../ -type f -name Olares.yaml | while read f; do
done
sed -i "s/#__VERSION__/${VERSION}/g" ${manifest}
path="${REPO_PATH:-/}"
sed -i "s|#__REPO_PATH__|${path}|g" ${manifest}

200
build/get-manifest.py Normal file
View File

@@ -0,0 +1,200 @@
#!/usr/bin/env python3
import requests
import json
import argparse
import re
import sys
import platform
def parse_image_name(image_name):
"""
Parses a full image name into registry, repository, and reference (tag/digest).
Handles defaults for Docker Hub.
"""
# Default to 'latest' tag if no tag or digest is specified
if ":" not in image_name and "@" not in image_name:
image_name += ":latest"
# Split repository from reference (tag or digest)
if "@" in image_name:
repo_part, reference = image_name.rsplit("@", 1)
else:
repo_part, reference = image_name.rsplit(":", 1)
# Determine registry and repository
if "/" not in repo_part:
# This is an official Docker Hub image, e.g., "ubuntu"
registry = "registry-1.docker.io"
repository = f"library/{repo_part}"
else:
parts = repo_part.split("/")
# If the first part looks like a domain name, it's the registry
if "." in parts[0] or ":" in parts[0]:
registry = parts[0]
repository = "/".join(parts[1:])
else:
# A scoped Docker Hub image, e.g., "bitnami/nginx"
registry = "registry-1.docker.io"
repository = repo_part
return registry, repository, reference
def get_auth_token(registry, repository):
"""
Gets an authentication token from the registry's auth service.
"""
# First, probe the registry to get the auth challenge
try:
probe_url = f"https://{registry}/v2/"
response = requests.get(probe_url, timeout=10)
except requests.exceptions.RequestException as e:
print(f"Error: Could not connect to registry at {probe_url}. Details: {e}", file=sys.stderr)
sys.exit(1)
if response.status_code != 401:
# Either public or something is wrong, we can try without a token
return None
auth_header = response.headers.get("Www-Authenticate")
if not auth_header:
print(f"Error: Registry {registry} returned 401 but did not provide Www-Authenticate header.", file=sys.stderr)
sys.exit(1)
# Parse the Www-Authenticate header to find realm, service, and scope
try:
realm = re.search('realm="([^"]+)"', auth_header).group(1)
service = re.search('service="([^"]+)"', auth_header).group(1)
# Scope for the specific repository is needed
scope = f"repository:{repository}:pull"
except AttributeError:
print(f"Error: Could not parse Www-Authenticate header: {auth_header}", file=sys.stderr)
sys.exit(1)
# Request the actual token from the auth realm
auth_params = {
"service": service,
"scope": scope
}
try:
auth_response = requests.get(realm, params=auth_params, timeout=10)
auth_response.raise_for_status()
return auth_response.json().get("token")
except requests.exceptions.RequestException as e:
print(f"Error: Failed to get auth token from {realm}. Details: {e}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError:
print(f"Error: Failed to decode JSON response from auth server: {auth_response.text}", file=sys.stderr)
sys.exit(1)
def get_manifest(registry, repository, reference, token):
"""
Fetches the image manifest from the registry.
"""
manifest_url = f"https://{registry}/v2/{repository}/manifests/{reference}"
headers = {
# Request multiple manifest types, the registry will return the correct one
"Accept": "application/vnd.oci.image.index.v1+json, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json"
}
if token:
headers["Authorization"] = f"Bearer {token}"
try:
response = requests.get(manifest_url, headers=headers, timeout=10)
response.raise_for_status()
return response.json()
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401 and not token:
print("Error: Received 401 Unauthorized. Attempting to get a token...", file=sys.stderr)
# The initial probe might have passed, but manifest access requires auth.
# We re-run the token acquisition logic.
new_token = get_auth_token(registry, repository)
if new_token:
return get_manifest(registry, repository, reference, new_token)
print(f"Error: Failed to fetch manifest from {manifest_url}. Status: {e.response.status_code}", file=sys.stderr)
print(f"Response: {e.response.text}", file=sys.stderr)
sys.exit(1)
except requests.exceptions.RequestException as e:
print(f"Error: A network error occurred. Details: {e}", file=sys.stderr)
sys.exit(1)
def main():
parser = argparse.ArgumentParser(
description="Fetch an OCI/Docker image manifest from a container registry.",
epilog="""Examples:
python get-manifest.py ubuntu:22.04
python get-manifest.py quay.io/brancz/kube-rbac-proxy:v0.18.1 -o manifest.json
python get-manifest.py gcr.io/google-containers/pause:3.9""",
formatter_class=argparse.RawTextHelpFormatter
)
parser.add_argument("image_name", help="Full name of the container image (e.g., 'ubuntu:latest' or 'quay.io/prometheus/node-exporter:v1.7.0')")
parser.add_argument("-o", "--output-file", help="Optional. Path to write the final manifest JSON to. If not provided, prints to stdout.")
args = parser.parse_args()
registry, repository, reference = parse_image_name(args.image_name)
# Suppress informational prints if writing to a file
verbose_print = print if not args.output_file else lambda *a, **k: None
verbose_print(f"Registry: {registry}")
verbose_print(f"Repository: {repository}")
verbose_print(f"Reference: {reference}", end='\n\n', flush=True)
token = get_auth_token(registry, repository)
if not token and not args.output_file:
print("No authentication token needed or could be retrieved. Proceeding without token...", file=sys.stderr)
manifest = get_manifest(registry, repository, reference, token)
final_manifest = None
media_type = manifest.get("mediaType", "")
if "manifest.list" in media_type or "image.index" in media_type:
verbose_print("Detected a multi-platform image index. Finding manifest for current architecture...")
system_arch = platform.machine()
arch_map = {"x86_64": "amd64", "aarch64": "arm64"}
target_arch = arch_map.get(system_arch, system_arch)
verbose_print(f"System architecture: {system_arch} -> Target: linux/{target_arch}")
target_digest = None
for m in manifest.get("manifests", []):
plat = m.get("platform", {})
if plat.get("os") == "linux" and plat.get("architecture") == target_arch:
target_digest = m.get("digest")
break
if target_digest:
verbose_print(f"Found manifest for linux/{target_arch} with digest: {target_digest}\n")
final_manifest = get_manifest(registry, repository, target_digest, token)
else:
print(f"Error: Could not find a manifest for 'linux/{target_arch}' in the index.", file=sys.stderr)
if not args.output_file:
print("Available platforms:", file=sys.stderr)
for m in manifest.get("manifests", []):
print(f" - {m.get('platform', {}).get('os')}/{m.get('platform', {}).get('architecture')}", file=sys.stderr)
sys.exit(1)
else:
final_manifest = manifest
if final_manifest:
if args.output_file:
try:
with open(args.output_file, 'w') as f:
json.dump(final_manifest, f, indent=2)
print(f"Successfully wrote manifest to {args.output_file}")
except IOError as e:
print(f"Error: Could not write to file {args.output_file}. Details: {e}", file=sys.stderr)
sys.exit(1)
else:
print(json.dumps(final_manifest, indent=2))
if __name__ == "__main__":
main()

View File

@@ -21,19 +21,26 @@ if [ ! -d ${DIST} ]; then
mkdir -p ${DIST}
cp -rf ${BUILD_TEMPLATE}/* ${DIST}/.
cp -rf ${BUILD_TEMPLATE}/.env ${DIST}/.
cp -rf ${BUILD_TEMPLATE}/wizard/config/os-chart-template ${DIST}/wizard/config/os-framework
cp -rf ${BUILD_TEMPLATE}/wizard/config/os-chart-template ${DIST}/wizard/config/os-platform
rm -rf ${DIST}/wizard/config/os-chart-template
fi
APP_DIST=${DIST}/wizard/config/apps
SYSTEM_DIST=${DIST}/wizard/config/system/templates
SETTINGS_DIST=${DIST}/wizard/config/settings/templates
CRD_DIST=${SETTINGS_DIST}/crds
DEPLOY_DIST=${SYSTEM_DIST}/deploy
mkdir -p ${APP_DIST}
mkdir -p ${CRD_DIST}
mkdir -p ${DEPLOY_DIST}
for mod in "${PACKAGE_MODULE[@]}";do
echo "packaging ${mod} ..."
SYSTEM_DIST=${DIST}/wizard/config/os-framework/templates
if [ ${mod} == "platform" ]; then
SYSTEM_DIST=${DIST}/wizard/config/os-platform/templates
fi
DEPLOY_DIST=${SYSTEM_DIST}/deploy
mkdir -p ${DEPLOY_DIST}
find ${mod} -type d -name .olares | while read app; do
# package user app charts to install wizard
@@ -67,6 +74,6 @@ echo "packaging launcher ..."
run_cmd "cp -rf framework/bfl/.olares/config/launcher ${DIST}/wizard/config/"
echo "packaging gpu ..."
run_cmd "cp -rf framework/gpu/.olares/config/gpu ${DIST}/wizard/config/"
run_cmd "cp -rf infrastructure/gpu/.olares/config/gpu ${DIST}/wizard/config/"
echo "packaging completed"

View File

@@ -23,26 +23,28 @@ while read line; do
continue
fi
bash ${BASE_DIR}/download-deps.sh $PLATFORM $line
if [ $? -ne 0 ]; then
exit -1
fi
filename=$(echo "$line"|awk -F"," '{print $1}')
echo "if exists $filename ... "
name=$(echo -n "$filename"|md5sum|awk '{print $1}')
checksum="$name.checksum.txt"
md5sum $name > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
echo "if exists $filename ... "
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name > /dev/null
if [ $? -ne 0 ]; then
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz)
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name)
if [ $code -eq 403 ]; then
bash ${BASE_DIR}/download-deps.sh $PLATFORM $line
if [ $? -ne 0 ]; then
exit -1
fi
md5sum $name > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
set -ex
aws s3 cp $name s3://terminus-os-install/$path$name --acl=public-read
aws s3 cp $name s3://terminus-os-install/backup/$path$backup_file --acl=public-read

View File

@@ -10,6 +10,7 @@ cat $1|while read image; do
echo "if exists $image ... "
name=$(echo -n "$image"|md5sum|awk '{print $1}')
checksum="$name.checksum.txt"
manifest="$name.manifest.json"
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz > /dev/null
if [ $? -ne 0 ]; then
@@ -68,48 +69,29 @@ cat $1|while read image; do
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image"
echo "failed to check image checksum"
exit -1
fi
fi
fi
# upload to tencent cloud cos
# curl -fsSLI https://cdn.joinolares.cn/$path$name.tar.gz > /dev/null
# if [ $? -ne 0 ]; then
# set -e
# docker pull $image
# docker save $image -o $name.tar
# gzip $name.tar
# md5sum $name.tar.gz > $checksum
# coscmd upload ./$name.tar.gz /$path$name.tar.gz
# coscmd upload ./$checksum /$path$checksum
# echo "upload $name to cos completed"
# set +e
# fi
# # re-upload checksum.txt
# curl -fsSLI https://cdn.joinolares.cn/$path$checksum > /dev/null
# if [ $? -ne 0 ]; then
# set -e
# docker pull $image
# docker save $image -o $name.tar
# gzip $name.tar
# md5sum $name.tar.gz > $checksum
# coscmd upload ./$name.tar.gz /$path$name.tar.gz
# coscmd upload ./$checksum /$path$checksum
# echo "upload $name to cos completed"
# set +e
# fi
# upload manifest.json
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$manifest > /dev/null
if [ $? -ne 0 ]; then
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$manifest)
if [ $code -eq 403 ]; then
set -ex
BASE_DIR=$(dirname $(realpath -s $0))
python3 $BASE_DIR/get-manifest.py $image -o $manifest
aws s3 cp $manifest s3://terminus-os-install/$path$manifest --acl=public-read
echo "upload $name manifest completed"
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image manifest"
exit -1
fi
fi
fi
done

View File

@@ -15,12 +15,14 @@ builds:
goarm:
- 7
ignore:
- goos: linux
goarch: arm64
- goos: darwin
goarch: arm
- goos: darwin
goarch: amd64
- goos: windows
goarch: arm
- goos: windows
goarch: arm64
ldflags:
- -s
- -w
@@ -29,10 +31,6 @@ dist: ./output
archives:
- id: olares-cli
name_template: "{{ .ProjectName }}-v{{ .Version }}_{{ .Os }}_{{ .Arch }}"
replacements:
linux: linux
amd64: amd64
arm: arm64
checksum:
name_template: "checksums.txt"
release:

View File

@@ -1 +1,92 @@
# installer
# Olares CLI
This directory contains the code for **olares-cli**, the official command-line interface for administering an **Olares** cluster. It provides a modular, pipeline-based architecture for orchestrating complex system operations. See the full [Olares CLI Documentation](https://docs.olares.com/developer/install/cli-1.12/olares-cli.html) for command reference and tutorials.
Key responsibilities include:
- **Cluster management**: Installing, upgrading, restarting, and maintaining an Olares cluster.
- **Node management**: Adding nodes to or removing them from an Olares cluster.
## Execution Model
For most commands, `olares-cli` executes through a four-tier hierarchy:
```
Pipeline ➜ Module ➜ Task ➜ Action
```
### Example: `install-olares` Pipeline
```text
Pipeline: Install Olares
├── ...other modules
└── Module: Bootstrap OS
├── ...other tasks
├── Task: Check Prerequisites
│ └── Action: run-precheck.sh
└── Task: Configure System
└── Action: apply-sysctl
```
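To make the hierarchy concrete, below is a minimal, self-contained Go sketch of a pipeline running modules, tasks, and actions in order. The type names mirror the abstractions under `pkg/core`, but they are simplified stand-ins for illustration, not the actual API.
```go
package main

import "fmt"

// Action is the smallest unit of work. (Simplified stand-in for pkg/core/action.)
type Action interface {
	Execute() error
}

// RunScript is a toy Action; a real one would exec the named script.
type RunScript struct{ Path string }

func (a RunScript) Execute() error {
	fmt.Println("running", a.Path)
	return nil
}

// Task wraps one Action; the real Task also carries prepares and retries.
type Task struct {
	Name   string
	Action Action
}

// Module groups ordered Tasks.
type Module struct {
	Name  string
	Tasks []Task
}

// Pipeline runs Modules in order, stopping at the first error.
type Pipeline struct {
	Name    string
	Modules []Module
}

func (p Pipeline) Run() error {
	for _, m := range p.Modules {
		for _, t := range m.Tasks {
			if err := t.Action.Execute(); err != nil {
				return fmt.Errorf("%s/%s/%s: %w", p.Name, m.Name, t.Name, err)
			}
		}
	}
	return nil
}

func main() {
	p := Pipeline{
		Name: "Install Olares",
		Modules: []Module{{
			Name: "Bootstrap OS",
			Tasks: []Task{
				{Name: "Check Prerequisites", Action: RunScript{Path: "run-precheck.sh"}},
				{Name: "Configure System", Action: RunScript{Path: "apply-sysctl"}},
			},
		}},
	}
	if err := p.Run(); err != nil {
		fmt.Println(err)
	}
}
```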
## Repository layout
```text
cli/
├── cmd/ # Cobra command definitions
│ ├── main.go # CLI entry point
│ └── ctl/
│ ├── root.go
│ ├── os/ # OS-level maintenance commands
│ ├── node/ # Cluster node operations
│ └── gpu/ # GPU management
└── pkg/
├── core/
│ ├── action/ # Re-usable action primitives
│ ├── module/ # Module abstractions
│ ├── pipeline/ # Pipeline abstractions
│ └── task/ # Task abstractions
└── pipelines/ # Pre-built pipelines
    └── ... # actual modules and tasks for various commands and components
```
## Build from source
### Prerequisites
* **Go 1.24+**
* **GoReleaser** (optional, for cross-compiling and packaging)
### Sample commands
```bash
# Clone the repo and enter the CLI folder
cd cli
# 1) Build for the host OS/ARCH
go build -o olares-cli ./cmd/main.go
# 2) Cross-compile for Linux amd64 (from macOS, for example)
GOOS=linux GOARCH=amd64 go build -o olares-cli ./cmd/main.go
# 3) Produce multi-platform artifacts (tar.gz, checksums, etc.)
goreleaser release --snapshot --clean
```
---
## Development workflow
### Add a new command
1. Create the command file in `cmd/ctl/<category>/` (a minimal stub is sketched after this list).
2. Define a pipeline in `pkg/pipelines/`.
3. Implement modules & tasks inside the relevant `pkg/` sub-packages.
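As a rough illustration of step 1, a new Cobra command could look like the hypothetical stub below. The package, command name, and pipeline hook are assumptions made for the example, not existing code.
```go
package node

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewCmdNodeList is a hypothetical constructor for an
// `olares-cli node list` subcommand; names are illustrative only.
func NewCmdNodeList() *cobra.Command {
	return &cobra.Command{
		Use:   "list",
		Short: "List the nodes in the Olares cluster",
		RunE: func(cmd *cobra.Command, args []string) error {
			// A real command would delegate to a pipeline defined in
			// pkg/pipelines instead of printing a placeholder.
			fmt.Println("running the node-list pipeline ...")
			return nil
		},
	}
}
```
The constructor would then be wired into the command tree, for example via `AddCommand` on the parent command defined in `cmd/ctl/root.go`.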
### Test your build
1. Upload the self-built `olares-cli` binary to a machine that's running Olares.
2. Replace the existing `olares-cli` binary on the machine using `sudo cp -f olares-cli /usr/local/bin`.
3. Execute arbitrary commands using `olares-cli`.

View File

@@ -60,7 +60,7 @@ echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-arptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_reserved_ports = 30000-32767' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_reserved_ports = 30000-32767,46800-50000' >> /etc/sysctl.conf
echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf
echo 'fs.inotify.max_user_instances = 524288' >> /etc/sysctl.conf
echo 'kernel.pid_max = 65535' >> /etc/sysctl.conf
@@ -84,7 +84,7 @@ sed -r -i "s@#{0,}?net.ipv4.ip_forward ?= ?(0|1)@net.ipv4.ip_forward = 1@g" /et
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-arptables ?= ?(0|1)@net.bridge.bridge-nf-call-arptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-ip6tables ?= ?(0|1)@net.bridge.bridge-nf-call-ip6tables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-iptables ?= ?(0|1)@net.bridge.bridge-nf-call-iptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_local_reserved_ports ?= ?([0-9]{1,}-{0,1},{0,1}){1,}@net.ipv4.ip_local_reserved_ports = 30000-32767@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_local_reserved_ports ?= ?([0-9]{1,}-{0,1},{0,1}){1,}@net.ipv4.ip_local_reserved_ports = 30000-32767,46800-50000@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?vm.max_map_count ?= ?([0-9]{1,})@vm.max_map_count = 262144@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?fs.inotify.max_user_instances ?= ?([0-9]{1,})@fs.inotify.max_user_instances = 524288@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?kernel.pid_max ?= ?([0-9]{1,})@kernel.pid_max = 65535@g" /etc/sysctl.conf

View File

@@ -265,7 +265,7 @@ const (
CacheAppServicePod = "app_service_pod_name"
CacheAppValues = "app_built_in_values"
CacheCountPodsUsingHostIP = "count_pods_using_host_ip"
CacheCountPodsWaitForRecreation = "count_pods_wait_for_recreation"
CacheUpgradeUsers = "upgrade_users"
CacheUpgradeAdminUser = "upgrade_admin_user"

View File

@@ -73,7 +73,6 @@ type Argument struct {
ImagesDir string `json:"images_dir"`
Namespace string `json:"namespace"`
DeleteCRI bool `json:"delete_cri"`
DeleteCache bool `json:"delete_cache"`
Role string `json:"role"`
Type string `json:"type"`
Kubetype string `json:"kube_type"`
@@ -322,10 +321,26 @@ func (a *Argument) SaveReleaseInfo() error {
if a.OlaresVersion == "" {
return errors.New("invalid: empty olares version")
}
releaseInfoMap := map[string]string{
ENV_OLARES_BASE_DIR: a.BaseDir,
ENV_OLARES_VERSION: a.OlaresVersion,
}
if a.User != nil {
releaseInfoMap["OLARES_NAME"] = fmt.Sprintf("%s@%s", a.User.UserName, a.User.DomainName)
} else {
if util.IsExist(OlaresReleaseFile) {
// if the user is not set, try to load the user name from the release file
envs, err := godotenv.Read(OlaresReleaseFile)
if err == nil {
if userName, ok := envs["OLARES_NAME"]; ok {
releaseInfoMap["OLARES_NAME"] = userName
}
}
}
}
if !util.IsExist(filepath.Dir(OlaresReleaseFile)) {
if err := os.MkdirAll(filepath.Dir(OlaresReleaseFile), 0755); err != nil {
return fmt.Errorf("failed to create directory %s: %v", filepath.Dir(OlaresReleaseFile), err)
@@ -395,10 +410,6 @@ func (a *Argument) SetRegistryMirrors(registryMirrors string) {
a.RegistryMirrors = registryMirrors
}
func (a *Argument) SetDeleteCache(deleteCache bool) {
a.DeleteCache = deleteCache
}
func (a *Argument) SetDeleteCRI(deleteCRI bool) {
a.DeleteCRI = deleteCRI
}

View File

@@ -1,16 +1,16 @@
package common
const (
NamespaceDefault = "default"
NamespaceKubeNodeLease = "kube-node-lease"
NamespaceKubePublic = "kube-public"
NamespaceKubeSystem = "kube-system"
NamespaceKubekeySystem = "kubekey-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceKubesphereMonitoringFederated = "kubesphere-monitoring-federated"
NamespaceKubesphereMonitoringSystem = "kubesphere-monitoring-system"
NamespaceKubesphereSystem = "kubesphere-system"
NamespaceOsSystem = "os-system"
NamespaceDefault = "default"
NamespaceKubeNodeLease = "kube-node-lease"
NamespaceKubePublic = "kube-public"
NamespaceKubeSystem = "kube-system"
NamespaceKubekeySystem = "kubekey-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceKubesphereMonitoringSystem = "kubesphere-monitoring-system"
NamespaceKubesphereSystem = "kubesphere-system"
NamespaceOsFramework = "os-framework"
NamespaceOsPlatform = "os-platform"
ChartNameRedis = "redis"
ChartNameSnapshotController = "snapshot-controller"
@@ -19,6 +19,7 @@ const (
ChartNameKsConfig = "ks-config"
ChartNameMonitorNotification = "monitor-notification"
ChartNameAccount = "account"
ChartNameSystem = "system"
ChartNameOSFramework = "os-framework"
ChartNameOSPlatform = "os-platform"
ChartNameSettings = "settings"
)

View File

@@ -133,8 +133,11 @@ type DisableTerminusdService struct {
}
func (s *DisableTerminusdService) Execute(runtime connector.Runtime) error {
if _, err := runtime.GetRunner().SudoCmd("systemctl disable --now olaresd", false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "disable olaresd failed")
stdout, _ := runtime.GetRunner().SudoCmd("systemctl is-active olaresd", false, false)
if stdout == "active" {
if _, err := runtime.GetRunner().SudoCmd("systemctl disable --now olaresd", false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "disable olaresd failed")
}
}
return nil
}
@@ -144,10 +147,18 @@ type UninstallTerminusd struct {
}
func (r *UninstallTerminusd) Execute(runtime connector.Runtime) error {
var olaresdFiles []string
svcpath := filepath.Join("/etc/systemd/system", templates.TerminusdService.Name())
svcenvpath := filepath.Join("/etc/systemd/system", templates.TerminusdEnv.Name())
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("rm -rf %s && rm -rf %s && rm -rf /usr/local/bin/olaresd", svcpath, svcenvpath), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove olaresd failed")
binPath := "/usr/local/bin/olaresd"
olaresdFiles = append(olaresdFiles, svcpath, svcenvpath, binPath)
for _, pidFile := range []string{"installing.pid", "changingip.pid"} {
olaresdFiles = append(olaresdFiles, filepath.Join(runtime.GetBaseDir(), pidFile))
}
for _, f := range olaresdFiles {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("rm -rf %s", f), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove olaresd failed")
}
}
return nil
}

View File

@@ -263,30 +263,25 @@ type NodeLabelingModule struct {
func (l *NodeLabelingModule) Init() {
l.Name = "NodeLabeling"
updateNode := &task.RemoteTask{
Name: "UpdateNode",
Hosts: l.Runtime.GetHostsByRole(common.Master),
updateNode := &task.LocalTask{
Name: "UpdateNode",
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(CudaInstalled),
new(K8sNodeInstalled),
},
Action: new(UpdateNodeLabels),
Parallel: false,
Retry: 1,
Action: new(UpdateNodeLabels),
Retry: 1,
}
restartPlugin := &task.RemoteTask{
Name: "RestartPlugin",
Hosts: l.Runtime.GetHostsByRole(common.Master),
restartPlugin := &task.LocalTask{
Name: "RestartPlugin",
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(CudaInstalled),
new(K8sNodeInstalled),
},
Action: new(RestartPlugin),
Parallel: false,
Retry: 1,
Action: new(RestartPlugin),
Retry: 1,
}
l.Tasks = []task.Interface{

View File

@@ -649,7 +649,7 @@ func (t *PrintPluginsStatus) Execute(runtime connector.Runtime) error {
}
}
gpuScheduler, err := client.Kubernetes().CoreV1().Pods("gpu-system").List(context.Background(), metav1.ListOptions{LabelSelector: "name=gpu-scheduler"})
gpuScheduler, err := client.Kubernetes().CoreV1().Pods("os-gpu").List(context.Background(), metav1.ListOptions{LabelSelector: "name=gpu-scheduler"})
if err != nil {
logger.Error("get gpu-scheduler status error, ", err)
}
@@ -676,7 +676,7 @@ func (t *RestartPlugin) Execute(runtime connector.Runtime) error {
return fmt.Errorf("kubectl not found")
}
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s rollout restart ds gpu-scheduler -n gpu-system", kubectlpath), false, true); err != nil {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s rollout restart ds gpu-scheduler -n os-gpu", kubectlpath), false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "Failed to restart gpu-scheduler")
}

View File

@@ -195,11 +195,13 @@ func (g *GenerateK3sService) Execute(runtime connector.Runtime) error {
defaultKubeletArs := map[string]string{
"kube-reserved": "cpu=200m,memory=250Mi,ephemeral-storage=1Gi",
"system-reserved": "cpu=200m,memory=250Mi,ephemeral-storage=1Gi",
"eviction-hard": "memory.available<5%,nodefs.available<10%",
"eviction-hard": "memory.available<5%,nodefs.available<10%,imagefs.available<10%",
"config": "/etc/rancher/k3s/kubelet.config",
"containerd": container.DefaultContainerdCRISocket,
"cgroup-driver": "systemd",
"runtime-request-timeout": "5m",
"image-gc-high-threshold": "91",
"image-gc-low-threshold": "90",
}
defaultKubeProxyArgs := map[string]string{
"proxy-mode": "ipvs",

View File

@@ -307,6 +307,8 @@ func GetKubeletConfiguration(runtime connector.Runtime, kubeConf *common.KubeCon
"evictionPressureTransitionPeriod": "30s",
"featureGates": FeatureGatesDefaultConfiguration,
"runtimeRequestTimeout": "5m",
"imageGCHighThresholdPercent": 91,
"imageGCLowThresholdPercent": 90,
}
if securityEnhancement {

View File

@@ -47,24 +47,6 @@ func (m *DeleteKubeSphereCachesModule) Init() {
}
}
type DeleteCacheModule struct {
common.KubeModule
}
func (m *DeleteCacheModule) Init() {
m.Name = "DeleteCache"
deleteCache := &task.LocalTask{
Name: "DeleteCache",
Prepare: new(ShouldDeleteCache),
Action: new(DeleteCache),
}
m.Tasks = []task.Interface{
deleteCache,
}
}
type DeployModule struct {
common.KubeModule
Skip bool

File diff suppressed because one or more lines are too long

View File

@@ -4,8 +4,6 @@
image:
# Overrides the image tag whose default is the chart appVersion.
ks_controller_manager_repo: kubesphere/ks-controller-manager
ks_controller_manager_tag: "v3.3.0"
ks_apiserver_repo: beclab/ks-apiserver
ks_apiserver_tag: "v3.3.0-ext-3"

View File

@@ -32,7 +32,7 @@ spec:
- command:
- ks-apiserver
- --logtostderr=true
image: beclab/ks-apiserver:0.0.18
image: beclab/ks-apiserver:0.0.21
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: ks-apiserver
ports:

View File

@@ -1,121 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ks-controller-manager
tier: backend
version: {{ .Chart.AppVersion }}
name: ks-controller-manager
spec:
strategy:
rollingUpdate:
maxSurge: 0
type: RollingUpdate
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
template:
metadata:
labels:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- command:
- controller-manager
- --logtostderr=true
- --leader-elect=false
image: beclab/ks-controller-manager:0.0.18
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: ks-controller-manager
ports:
- containerPort: 8080
protocol: TCP
resources:
{{- toYaml .Values.controller.resources | nindent 12 }}
volumeMounts:
- mountPath: /etc/kubesphere/
name: kubesphere-config
- mountPath: /etc/localtime
name: host-time
readOnly: true
{{- if .Values.controller.extraVolumeMounts }}
{{- toYaml .Values.controller.extraVolumeMounts | nindent 8 }}
{{- end }}
env:
{{- if .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- end }}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
serviceAccountName: {{ include "ks-core.serviceAccountName" . }}
terminationGracePeriodSeconds: 30
volumes:
- name: kubesphere-config
configMap:
name: kubesphere-config
defaultMode: 420
- hostPath:
path: /etc/localtime
type: ""
name: host-time
{{- if .Values.controller.extraVolumes }}
{{ toYaml .Values.controller.extraVolumes | nindent 6 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- ks-controller-manager
namespaces:
- kubesphere-system
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: ks-controller-manager
tier: backend
version: {{ .Chart.AppVersion }}
name: ks-controller-manager
spec:
ports:
- port: 443
protocol: TCP
targetPort: 8443
selector:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
sessionAffinity: None
type: ClusterIP

View File

@@ -4,8 +4,6 @@
image:
# Overrides the image tag whose default is the chart appVersion.
ks_controller_manager_repo: kubesphere/ks-controller-manager
ks_controller_manager_tag: "v3.3.0"
ks_apiserver_repo: beclab/ks-apiserver
ks_apiserver_tag: "v3.3.0-ext-3"

View File

@@ -748,12 +748,12 @@ spec:
sum (node_cpu_seconds_total{job="node-exporter", mode=~"user|nice|system|iowait|irq|softirq"}) by (cpu, instance, job, namespace, pod)
record: node_cpu_used_seconds_total
- expr: |
max(kube_pod_info{job="kube-state-metrics"} * on(node) group_left(role) kube_node_role{job="kube-state-metrics", role="master"} or on(pod, namespace) kube_pod_info{job="kube-state-metrics"}) by (node, namespace, host_ip, role, pod)
max(kube_pod_info{job="kube-state-metrics"} * on(node) group_left(role) kube_node_role{job="kube-state-metrics", role="master"} or on(pod, namespace) kube_pod_info{job="kube-state-metrics"}) by (node, namespace, role, pod)
record: 'node_namespace_pod:kube_pod_info:'
- expr: |
count by (node, host_ip, role) (sum by (node, cpu, host_ip, role) (
count by (node, role) (sum by (node, cpu, role) (
node_cpu_seconds_total{job="node-exporter"}
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
))
record: node:node_num_cpu:sum
@@ -761,27 +761,27 @@ spec:
avg(irate(node_cpu_used_seconds_total{job="node-exporter"}[5m]))
record: :node_cpu_utilisation:avg1m
- expr: |
avg by (node, host_ip, role) (
avg by (node, role) (
irate(node_cpu_used_seconds_total{job="node-exporter"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_cpu_utilisation:avg1m
- expr: |
avg by (node, host_ip, role) (
avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"user"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_user_cpu_utilisation:avg1m
- expr: |
avg by (node, host_ip, role) (
avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"system"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_system_cpu_utilisation:avg1m
- expr: |
avg by (node, host_ip, role) (
avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"iowait"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_iowait_cpu_utilisation:avg1m
- expr: |
@@ -806,9 +806,9 @@ spec:
label_replace(node_memory_Cached_bytes, "node", "$1", "instance", "(.*)")
record: node:node_memory_Cached_bytes
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
(node_memory_Slab_bytes{job="node-exporter"} + node_memory_KernelStack_bytes{job="node-exporter"} + node_memory_PageTables_bytes{job="node-exporter"}+ node_memory_HardwareCorrupted_bytes{job="node-exporter"}+node_memory_Bounce_bytes{job="node-exporter"}-node_memory_SReclaimable_bytes{job="node-exporter"})
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_system_reserved
@@ -825,16 +825,16 @@ spec:
sum(node_memory_MemTotal_bytes{job="node-exporter"})
record: ':node_memory_utilisation:'
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
(node_memory_MemFree_bytes{job="node-exporter"} + node_memory_Cached_bytes{job="node-exporter"} + node_memory_Buffers_bytes{job="node-exporter"} + node_memory_SReclaimable_bytes{job="node-exporter"})
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_bytes_available:sum
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
node_memory_MemTotal_bytes{job="node-exporter"}
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_bytes_total:sum
@@ -842,30 +842,30 @@ spec:
1 - (node:node_memory_bytes_available:sum / node:node_memory_bytes_total:sum)
record: 'node:node_memory_utilisation:'
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_disk_reads_completed_total{job="node-exporter"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_iops_reads:sum
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_disk_writes_completed_total{job="node-exporter"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_iops_writes:sum
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_disk_read_bytes_total{job="node-exporter"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_throughput_bytes_read:sum
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_disk_written_bytes_total{job="node-exporter"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_throughput_bytes_written:sum
@@ -874,74 +874,74 @@ spec:
sum(irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m]))
record: :node_net_utilisation:sum_irate
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
(irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[5m]) +
irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m]))
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_utilisation:sum_irate
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_bytes_transmitted:sum_irate
- expr: |
sum by (node, host_ip, role) (
sum by (node, role) (
irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[5m])
* on (namespace, pod) group_left(node, host_ip, role)
* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_bytes_received:sum_irate
- expr: |
sum by(node, host_ip, role) (sum(max(node_filesystem_files{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
sum by(node, role) (sum(max(node_filesystem_files{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: 'node:node_inodes_total:'
- expr: |
sum by(node, host_ip, role) (sum(max(node_filesystem_files_free{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
sum by(node, role) (sum(max(node_filesystem_files_free{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: 'node:node_inodes_free:'
- expr: |
sum by (node, host_ip, role) (node_load1{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
sum by (node, role) (node_load1{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load1:ratio
- expr: |
sum by (node, host_ip, role) (node_load5{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
sum by (node, role) (node_load5{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load5:ratio
- expr: |
sum by (node, host_ip, role) (node_load15{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
sum by (node, role) (node_load15{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load15:ratio
- expr: |
sum by (node, host_ip, role) ((kube_pod_status_scheduled{job="kube-state-metrics", condition="true"} > 0) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
sum by (node, role) ((kube_pod_status_scheduled{job="kube-state-metrics", condition="true"} > 0) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: node:pod_count:sum
- expr: |
(sum(kube_node_status_capacity{resource="pods", job="kube-state-metrics"}) by (node) * on(node) group_left(host_ip, role) max by(node, host_ip, role) (node_namespace_pod:kube_pod_info:{node!="",host_ip!=""}))
(sum(kube_node_status_capacity{resource="pods", job="kube-state-metrics"}) by (node) * on(node) group_left(role) max by(node, role) (node_namespace_pod:kube_pod_info:{node!=""}))
record: node:pod_capacity:sum
- expr: |
node:pod_running:count / node:pod_capacity:sum
record: node:pod_utilization:ratio
- expr: |
count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Succeeded"} > 0)) by (node, host_ip, role)
count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Succeeded"} > 0)) by (node, role)
record: node:pod_running:count
- expr: |
count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Running"} > 0)) by (node, host_ip, role)
count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Running"} > 0)) by (node, role)
record: node:pod_succeeded:count
- expr: |
count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) unless on (pod, namespace) ((kube_pod_status_ready{job="kube-state-metrics", condition="true"}>0) and on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Running"}>0)) unless on (pod, namespace) kube_pod_container_status_waiting_reason{job="kube-state-metrics", reason="ContainerCreating"}>0) by (node, host_ip, role)
count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) unless on (pod, namespace) ((kube_pod_status_ready{job="kube-state-metrics", condition="true"}>0) and on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Running"}>0)) unless on (pod, namespace) kube_pod_container_status_waiting_reason{job="kube-state-metrics", reason="ContainerCreating"}>0) by (node, role)
record: node:pod_abnormal:count
- expr: |
(count by(namespace, cluster) (kube_pod_info{job="kube-state-metrics"} unless on(pod, namespace, cluster) (kube_pod_status_phase{job="kube-state-metrics",phase="Succeeded"} > 0) unless on(pod, namespace, cluster) ((kube_pod_status_ready{condition="true",job="kube-state-metrics"} > 0) and on(pod, namespace, cluster) (kube_pod_status_phase{job="kube-state-metrics",phase="Running"} > 0)) unless on(pod, namespace, cluster) kube_pod_container_status_waiting_reason{job="kube-state-metrics",reason="ContainerCreating"} > 0) or on(namespace, cluster) (group by(namespace, cluster) (kube_pod_info{job="kube-state-metrics"}) * 0)) * on(namespace, cluster) group_left(user) (kube_namespace_labels{job="kube-state-metrics"}) > 0
record: user:pod_abnormal:count
- expr: |
node:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, host_ip, role)
node:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, role)
record: node:pod_abnormal:ratio
- expr: |
user:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, host_ip, role)
user:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, role)
record: user:pod_abnormal:ratio
- expr: |
sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role)
sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role)
record: 'node:disk_space_available:'
- expr: |
1- sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role) / sum(max(node_filesystem_size_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role)
1- sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role) / sum(max(node_filesystem_size_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role)
record: node:disk_space_utilization:ratio
- expr: |
(1 - (node:node_inodes_free: / node:node_inodes_total:))
@@ -956,7 +956,7 @@ spec:
(1-node:disk_avail:size/node:disk_capacity:size)
record: node:disk_utilization:ratio
- expr: |
label_replace(label_replace(avg by (instance) (node_hwmon_temp_celsius{chip=~".*"} * on(chip) group_left(chip_name) node_hwmon_chip_names{chip_name="coretemp"}),"metric", "cpu_temperature", "", ""),"node","$1","instance","(.*)")
label_replace(label_replace(avg by (instance) (node_hwmon_temp_celsius{chip=~".*"} * on(chip,instance) group_left(chip_name) node_hwmon_chip_names{chip_name="coretemp"}),"metric", "cpu_temperature", "", ""),"node","$1","instance","(.*)")
record: node:node_cpu_temp_celsius
- expr: |
label_replace(node_network_address_info{job="node-exporter", device!~"lo|docker.*|tailscale.*|veth.*|br.*|bond.*|flannel.*|cni.*|tun.*|virbr.*|vnet.*|cali.*|cilium.*|weave.*|vxlan.*|ovs-system|dummy.*|kube.*"},"node","$1","instance","(.*)")
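
Every per-node recording rule in this file drops the host_ip grouping label: the aggregations and the group_left joins against node_namespace_pod:kube_pod_info: are now keyed by (node, role) alone, so dashboards and alerts that grouped or filtered by host_ip need updating. Below is a minimal sketch of querying one of the updated rules with the Prometheus Go client; the address is a placeholder and prometheus/client_golang is assumed to be available:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"github.com/prometheus/client_golang/api"
    	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
    )

    func main() {
    	// Placeholder address; point this at the cluster's Prometheus service.
    	client, err := api.NewClient(api.Config{Address: "http://127.0.0.1:9090"})
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	// After this change the rule carries only node and role labels;
    	// consumers must not group or filter by host_ip any more.
    	result, warnings, err := v1.NewAPI(client).Query(ctx, "node:load1:ratio", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	if len(warnings) > 0 {
    		fmt.Println("warnings:", warnings)
    	}
    	fmt.Println(result)
    }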

View File

@@ -42,7 +42,7 @@ spec:
- --collector.netdev.address-info
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
image: beclab/node-exporter:0.0.2
image: beclab/node-exporter:0.0.3
name: node-exporter
securityContext:
privileged: true

View File

@@ -58,12 +58,12 @@ var kscorecrds = []map[string]string{
"resource": "default-http-backend",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "secrets",
"resource": "ks-controller-manager-webhook-cert",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "secrets",
// "resource": "ks-controller-manager-webhook-cert",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "serviceaccounts",
@@ -100,24 +100,24 @@ var kscorecrds = []map[string]string{
"resource": "ks-apiserver",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "services",
"resource": "ks-controller-manager",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "services",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "deployments",
"resource": "ks-apiserver",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "deployments",
"resource": "ks-controller-manager",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "deployments",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
//{
// "ns": "kubesphere-system",
// "kind": "validatingwebhookconfigurations",

View File

@@ -65,7 +65,7 @@ func (t *InitNamespace) Execute(runtime connector.Runtime) error {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
for _, ns := range []string{common.NamespaceKubesphereControlsSystem, common.NamespaceKubesphereMonitoringFederated} {
for _, ns := range []string{common.NamespaceKubesphereControlsSystem} {
if stdout, err := runtime.GetRunner().Cmd(fmt.Sprintf("%s create ns %s", kubectlpath, ns), false, true); err != nil {
if !strings.Contains(stdout, "already exists") {
logger.Errorf("create ns %s failed: %v", ns, err)
@@ -98,8 +98,6 @@ func (t *InitNamespace) Execute(runtime connector.Runtime) error {
common.NamespaceKubeSystem,
common.NamespaceKubekeySystem,
common.NamespaceKubesphereControlsSystem,
common.NamespaceKubesphereMonitoringFederated,
common.NamespaceKubesphereMonitoringSystem,
common.NamespaceKubesphereSystem,
}

View File

@@ -23,17 +23,6 @@ import (
versionutil "k8s.io/apimachinery/pkg/util/version"
)
type ShouldDeleteCache struct {
common.KubePrepare
}
func (p *ShouldDeleteCache) PreCheck(runtime connector.Runtime) (bool, error) {
if p.KubeConf.Arg.DeleteCache {
return true, nil
}
return false, nil
}
type VersionBelowV3 struct {
common.KubePrepare
}

View File

@@ -52,19 +52,6 @@ func (d *DeleteKubeSphereCaches) Execute(runtime connector.Runtime) error {
return nil
}
type DeleteCache struct {
common.KubeAction
}
func (t *DeleteCache) Execute(runtime connector.Runtime) error {
// var cacheDir = path.Join(runtime.GetBaseDir(), cc.ImagesDir)
// if err := util.RemoveDir(cacheDir); err != nil {
// return err
// }
// logger.Debugf("delete caches success")
return nil
}
type AddInstallerConfig struct {
common.KubeAction
}
@@ -368,7 +355,7 @@ func (c *Check) Execute(runtime connector.Runtime) error {
return fmt.Errorf("kubectl not found")
}
var labels = []string{"app=ks-apiserver", "app=ks-controller-manager"}
var labels = []string{"app=ks-apiserver"}
for _, label := range labels {
var cmd = fmt.Sprintf("%s get pod -n %s -l '%s' -o jsonpath='{.items[0].status.phase}'", kubectlpath, common.NamespaceKubesphereSystem, label)

View File

@@ -6,6 +6,7 @@ import (
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/core/module"
"github.com/beclab/Olares/cli/pkg/core/pipeline"
"github.com/beclab/Olares/cli/pkg/gpu"
"github.com/beclab/Olares/cli/pkg/k3s"
"github.com/beclab/Olares/cli/pkg/kubernetes"
"github.com/beclab/Olares/cli/pkg/manifest"
@@ -75,6 +76,7 @@ func (m *AddNodeModule) Init() {
&k3s.JoinNodesModule{},
}
}
m.underlyingModules = append(m.underlyingModules, &gpu.NodeLabelingModule{})
for _, underlyingModule := range m.underlyingModules {
underlyingModule.Default(m.Runtime, m.PipelineCache, m.ModuleCache)
underlyingModule.AutoAssert()
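
AddNodeModule now unconditionally appends gpu.NodeLabelingModule, so a node joined through this pipeline also gets its GPU-related node labels applied as part of the same run. The module's internals are not part of this diff; the snippet below is only a hypothetical sketch of such a labeling step using client-go, with a placeholder label key:

    package gpusketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // labelGPUNode is a hypothetical illustration of a node-labeling step;
    // "example.com/gpu" is a placeholder key, not the label Olares applies.
    func labelGPUNode(cs kubernetes.Interface, nodeName string) error {
    	patch := []byte(`{"metadata":{"labels":{"example.com/gpu":"true"}}}`)
    	_, err := cs.CoreV1().Nodes().Patch(
    		context.TODO(), nodeName, types.MergePatchType, patch, metav1.PatchOptions{})
    	return err
    }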

View File

@@ -105,6 +105,7 @@ func (p *phaseBuilder) phaseInstall() *phaseBuilder {
&certs.UninstallCertsFilesModule{},
&storage.DeleteUserDataModule{},
&terminus.DeleteWizardFilesModule{},
&terminus.DeleteUpgradeFilesModule{},
&storage.RemoveJuiceFSModule{},
&storage.DeletePhaseFlagModule{
PhaseFile: common.TerminusStateFileInstalled,
@@ -132,33 +133,13 @@ func (p *phaseBuilder) phasePrepare() *phaseBuilder {
PhaseFile: common.TerminusStateFilePrepared,
BaseDir: p.runtime.GetBaseDir(),
},
&daemon.UninstallTerminusdModule{},
&terminus.RemoveReleaseFileModule{},
)
}
return p
}
func (p *phaseBuilder) phaseDownload() *phaseBuilder {
terminusdAction := &daemon.CheckTerminusdService{}
err := terminusdAction.Execute()
if p.convert() >= PhaseDownload {
if err == nil {
p.modules = append(p.modules, &daemon.UninstallTerminusdModule{})
}
p.modules = append(p.modules,
&kubesphere.DeleteCacheModule{},
)
if p.runtime.Arg.DeleteCache {
p.modules = append(p.modules, &storage.DeleteCacheModule{
BaseDir: p.runtime.GetBaseDir(),
})
}
}
return p
}
func (p *phaseBuilder) phaseMacos() {
p.modules = []module.Module{
&precheck.GreetingsModule{},
@@ -168,9 +149,6 @@ func (p *phaseBuilder) phaseMacos() {
}
if p.convert() >= PhaseDownload {
p.modules = append(p.modules, &kubesphere.DeleteKubeSphereCachesModule{})
if p.runtime.Arg.DeleteCache {
p.modules = append(p.modules, &kubesphere.DeleteCacheModule{})
}
}
}
@@ -189,8 +167,7 @@ func UninstallTerminus(phase string, runtime *common.KubeRuntime) pipeline.Pipel
builder.
phaseInstall().
phaseStorage().
phasePrepare().
phaseDownload()
phasePrepare()
}
return pipeline.Pipeline{

View File

@@ -65,6 +65,7 @@ data:
health
ready
kubernetes {{ .DNSDomain }} in-addr.arpa ip6.arpa {
endpoint_pod_names
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
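
The added endpoint_pod_names directive tells the CoreDNS kubernetes plugin to use an endpoint's pod name, rather than the dashed-IP form, when synthesizing A records for headless-service endpoints, making pods resolvable as <pod>.<service>.<namespace>.svc.<domain>. A small lookup sketch with placeholder names:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Placeholder names; with endpoint_pod_names enabled, a headless-service
    	// endpoint is resolvable by its pod name instead of a dashed-IP name.
    	ips, err := net.LookupIP("mypod-0.myservice.myns.svc.cluster.local")
    	if err != nil {
    		fmt.Println("lookup failed (expected outside the cluster):", err)
    		return
    	}
    	fmt.Println(ips)
    }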

View File

@@ -5993,6 +5993,8 @@ spec:
# Enable or Disable VXLAN on the default IPv6 IP pool.
- name: CALICO_IPV6POOL_VXLAN
value: "Never"
- name: FELIX_HEALTHHOST
value: 127.0.0.1
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:

View File

@@ -30,6 +30,17 @@ func (m *Manager) Package() error {
return err
}
osChartTemplatePath := "wizard/config/os-chart-template"
for _, osm := range []string{"os-platform", "os-framework"} {
if err := util.CopyDirectory(filepath.Join(buildTemplate, osChartTemplatePath), filepath.Join(m.distPath, fmt.Sprintf("/wizard/config/%s", osm))); err != nil {
return err
}
}
if err := util.RemoveDir(filepath.Join(m.distPath, osChartTemplatePath)); err != nil {
return err
}
// Package modules
for _, mod := range modules {
if err := m.packageModule(mod); err != nil {
@@ -50,6 +61,13 @@ func (m *Manager) Package() error {
}
func (m *Manager) packageModule(mod string) error {
var distDeployType string
switch mod {
case "platform":
distDeployType = "os-platform"
case "framework":
distDeployType = "os-framework"
}
modPath := filepath.Join(m.olaresRepoRoot, mod)
err := filepath.Walk(modPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
@@ -78,7 +96,7 @@ func (m *Manager) packageModule(mod string) error {
// Package cluster deployments
deployPath := filepath.Join(path, "config/cluster/deploy")
if err := util.CopyDirectoryIfExists(deployPath, filepath.Join(m.distPath, "wizard/config/system/templates/deploy")); err != nil {
if err := util.CopyDirectoryIfExists(deployPath, filepath.Join(m.distPath, fmt.Sprintf("wizard/config/%s/templates/deploy", distDeployType))); err != nil {
return err
}
@@ -99,7 +117,7 @@ func (m *Manager) packageLauncher() error {
func (m *Manager) packageGPU() error {
fmt.Println("packaging gpu ...")
return util.CopyDirectory(
filepath.Join(m.olaresRepoRoot, "framework/gpu/.olares/config/gpu"),
filepath.Join(m.olaresRepoRoot, "infrastructure/gpu/.olares/config/gpu"),
filepath.Join(m.distPath, "wizard/config/gpu"),
)
}
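
Packaging now duplicates the shared os-chart-template into wizard/config/os-platform and wizard/config/os-framework, removes the template itself, and routes each module's cluster deploy files into one of the two charts via distDeployType. The routing reduces to a sketch like this (paths assumed from the calls above; modules other than platform and framework fall through with an empty type):

    package releasesketch

    // deployDirFor mirrors the switch in packageModule: only the platform and
    // framework modules are mapped to a dist deploy directory.
    func deployDirFor(mod string) string {
    	switch mod {
    	case "platform":
    		return "wizard/config/os-platform/templates/deploy"
    	case "framework":
    		return "wizard/config/os-framework/templates/deploy"
    	default:
    		return ""
    	}
    }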

View File

@@ -2,16 +2,16 @@ package builder
import (
"fmt"
"os"
"path/filepath"
"github.com/beclab/Olares/cli/pkg/core/util"
"github.com/beclab/Olares/cli/pkg/release/app"
"github.com/beclab/Olares/cli/pkg/release/manifest"
"os"
"path/filepath"
)
type Builder struct {
olaresRepoRoot string
vendorRepoPath string
distPath string
version string
manifestManager *manifest.Manager
@@ -20,8 +20,13 @@ type Builder struct {
func NewBuilder(olaresRepoRoot, version, cdnURL string, ignoreMissingImages bool) *Builder {
distPath := filepath.Join(olaresRepoRoot, ".dist/install-wizard")
vendorRepoPath := os.Getenv("OLARES_VENDOR_REPO_PATH")
if vendorRepoPath == "" {
vendorRepoPath = "/"
}
return &Builder{
olaresRepoRoot: olaresRepoRoot,
vendorRepoPath: vendorRepoPath,
distPath: distPath,
version: version,
manifestManager: manifest.NewManager(olaresRepoRoot, distPath, cdnURL, ignoreMissingImages),
@@ -69,6 +74,9 @@ func (b *Builder) archive() (string, error) {
if err := util.ReplaceInFile(file, "#__VERSION__", b.version); err != nil {
return "", err
}
if err := util.ReplaceInFile(file, "#__REPO_PATH__", b.vendorRepoPath); err != nil {
return "", err
}
}
tarFile := filepath.Join(b.olaresRepoRoot, fmt.Sprintf("install-wizard-%s.tar.gz", versionStr))
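
NewBuilder now reads OLARES_VENDOR_REPO_PATH, falling back to "/", and archive() substitutes it for the #__REPO_PATH__ placeholder in the wizard files alongside the existing #__VERSION__ replacement. A sketch of the same fallback-and-substitute pattern, with a hypothetical replaceInFile standing in for util.ReplaceInFile and a placeholder file path:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // replaceInFile is a hypothetical stand-in for util.ReplaceInFile: it
    // rewrites every occurrence of old with repl in the named file.
    func replaceInFile(path, old, repl string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, []byte(strings.ReplaceAll(string(data), old, repl)), 0644)
    }

    func main() {
    	repoPath := os.Getenv("OLARES_VENDOR_REPO_PATH")
    	if repoPath == "" {
    		repoPath = "/" // the same default NewBuilder applies
    	}
    	// "wizard/example.yaml" is a placeholder file for illustration.
    	if err := replaceInFile("wizard/example.yaml", "#__REPO_PATH__", repoPath); err != nil {
    		fmt.Println(err)
    	}
    }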

View File

@@ -4,10 +4,12 @@ import (
"bufio"
"crypto/md5"
"fmt"
"github.com/Masterminds/semver/v3"
dockerref "github.com/containerd/containerd/reference/docker"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"regexp"
"sigs.k8s.io/kustomize/kyaml/yaml"
@@ -265,11 +267,55 @@ func (m *Manager) scan() error {
m.extractedImages = sortedImages
for _, component := range uniqueComponents {
component, err = m.patchComponent(component)
if err != nil {
return err
}
m.extractedComponents = append(m.extractedComponents, component)
}
return nil
}
func (m *Manager) getLatestDailyBuildTag() (string, error) {
cmd := exec.Command("git", "tag", "-l")
cmd.Dir = m.olaresRepoRoot
output, err := cmd.CombinedOutput()
if err != nil {
return "", fmt.Errorf("failed to get git tags: %v", err)
}
tags := strings.Split(strings.TrimSpace(string(output)), "\n")
if len(tags) == 0 || (len(tags) == 1 && tags[0] == "") {
return "", fmt.Errorf("no git tags found")
}
var dailyTags []string
dailyBuildRegex := regexp.MustCompile(`^\d+\.\d+\.\d-\d{8}$`)
for _, tag := range tags {
tag = strings.TrimSpace(tag)
if dailyBuildRegex.MatchString(tag) {
dailyTags = append(dailyTags, tag)
}
}
if len(dailyTags) == 0 {
return "", fmt.Errorf("no daily build tags found")
}
sort.Slice(dailyTags, func(i, j int) bool {
iv, err := semver.NewVersion(dailyTags[i])
if err != nil {
return true
}
jv, err := semver.NewVersion(dailyTags[j])
if err != nil {
return false
}
return iv.LessThan(jv)
})
return dailyTags[len(dailyTags)-1], nil
}
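
getLatestDailyBuildTag collects tags matching the daily-build pattern (major.minor.patch-YYYYMMDD, with a single-digit patch component as the regex is written) and returns the newest one under semver ordering; the date suffix is a purely numeric prerelease identifier, which Masterminds semver compares numerically. A small, runnable illustration of that ordering:

    package main

    import (
    	"fmt"
    	"sort"

    	"github.com/Masterminds/semver/v3"
    )

    func main() {
    	tags := []string{"1.12.0-20250701", "1.11.5-20250630", "1.12.0-20250615"}
    	sort.Slice(tags, func(i, j int) bool {
    		iv, err := semver.NewVersion(tags[i])
    		if err != nil {
    			return true
    		}
    		jv, err := semver.NewVersion(tags[j])
    		if err != nil {
    			return false
    		}
    		return iv.LessThan(jv)
    	})
    	// The last element is the latest daily build.
    	fmt.Println(tags[len(tags)-1]) // 1.12.0-20250701
    }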
// Helper function to patch extracted image name
// before validating it
@@ -298,3 +344,22 @@ func (m *Manager) patchImage(image string) (string, error) {
image = strings.ReplaceAll(image, backupServerImageVersionTpl, backupVersion)
return image, nil
}
func (m *Manager) patchComponent(component BinaryOutput) (BinaryOutput, error) {
if component.ID != "olaresd" {
return component, nil
}
latestDailyBuildTag, err := m.getLatestDailyBuildTag()
if err != nil {
return BinaryOutput{}, fmt.Errorf("failed to get latest daily build tag (required to replace olaresd version): %v", err)
}
fmt.Printf("patching olaresd version to %s\n", latestDailyBuildTag)
component.Name = strings.ReplaceAll(component.Name, "#__VERSION__", latestDailyBuildTag)
component.AMD64 = strings.ReplaceAll(component.AMD64, "#__VERSION__", latestDailyBuildTag)
component.ARM64 = strings.ReplaceAll(component.ARM64, "#__VERSION__", latestDailyBuildTag)
return component, nil
}

View File

@@ -214,29 +214,6 @@ func (m *DeletePhaseFlagModule) Init() {
}
}
type DeleteCacheModule struct {
common.KubeModule
BaseDir string
}
func (m *DeleteCacheModule) Init() {
m.Name = "DeleteCaches"
deleteCaches := &task.RemoteTask{
Name: "DeleteCaches",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Action: &DeleteCaches{
BaseDir: m.BaseDir,
},
Parallel: false,
Retry: 1,
}
m.Tasks = []task.Interface{
deleteCaches,
}
}
type DeleteUserDataModule struct {
common.KubeModule
}

View File

@@ -325,38 +325,6 @@ func (t *DeletePhaseFlagFile) Execute(runtime connector.Runtime) error {
return nil
}
type DeleteCaches struct {
common.KubeAction
BaseDir string
}
func (t *DeleteCaches) Execute(runtime connector.Runtime) error {
var cachesDirs []string
filepath.WalkDir(t.BaseDir, func(path string, d fs.DirEntry, err error) error {
if path != t.BaseDir {
if d.IsDir() {
cachesDirs = append(cachesDirs, path)
return filepath.SkipDir
}
}
return nil
},
)
if cachesDirs != nil && len(cachesDirs) > 0 {
for _, cachesDir := range cachesDirs {
if util.IsExist(cachesDir) {
if err := util.RemoveDir(cachesDir); err != nil {
logger.Errorf("remove %s failed %v", cachesDir, err)
}
}
}
}
return nil
}
type DeleteTerminusUserData struct {
common.KubeAction
}

View File

@@ -218,7 +218,7 @@ func (c *CopyAppServiceHelmFiles) Execute(runtime connector.Runtime) error {
kubeclt, _ := util.GetCommand(common.CommandKubectl)
for _, app := range []string{"launcher", "apps"} {
var cmd = fmt.Sprintf("%s cp %s/wizard/config/%s os-system/%s:/userapps -c app-service", kubeclt, runtime.GetInstallerDir(), app, appServiceName)
var cmd = fmt.Sprintf("%s cp %s/wizard/config/%s os-framework/%s:/userapps -c app-service", kubeclt, runtime.GetInstallerDir(), app, appServiceName)
if _, err = runtime.GetRunner().SudoCmd(cmd, false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "copy files failed")
}
@@ -231,7 +231,7 @@ func getAppServiceName(client clientset.Client, runtime connector.Runtime) (stri
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
pods, err := client.Kubernetes().CoreV1().Pods(common.NamespaceOsSystem).List(ctx, metav1.ListOptions{LabelSelector: "tier=app-service"})
pods, err := client.Kubernetes().CoreV1().Pods(common.NamespaceOsFramework).List(ctx, metav1.ListOptions{LabelSelector: "tier=app-service"})
if err != nil {
return "", errors.Wrap(errors.WithStack(err), "get app-service failed")
}

View File

@@ -199,6 +199,23 @@ func (m *InstalledModule) Init() {
}
}
type DeleteUpgradeFilesModule struct {
common.KubeModule
}
func (d *DeleteUpgradeFilesModule) Init() {
d.Name = "DeleteUpgradeFiles"
deleteUpgradeFiles := &task.LocalTask{
Name: "DeleteUpgradeFiles",
Action: &DeleteUpgradeFiles{},
}
d.Tasks = []task.Interface{
deleteUpgradeFiles,
}
}
type DeleteWizardFilesModule struct {
common.KubeModule
}

View File

@@ -52,7 +52,7 @@ func (t *InstallOsSystem) Execute(runtime connector.Runtime) error {
if err != nil {
return err
}
actionConfig, settings, err := utils.InitConfig(config, common.NamespaceOsSystem)
actionConfig, settings, err := utils.InitConfig(config, common.NamespaceOsPlatform)
if err != nil {
return err
}
@@ -60,7 +60,6 @@ func (t *InstallOsSystem) Execute(runtime connector.Runtime) error {
var ctx, cancel = context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
var systemPath = path.Join(runtime.GetInstallerDir(), "wizard", "config", "system")
vals := map[string]interface{}{
"kubesphere": map[string]interface{}{"redis_password": redisPwd},
"backup": map[string]interface{}{
@@ -80,7 +79,21 @@ func (t *InstallOsSystem) Execute(runtime connector.Runtime) error {
vals["sharedlib"] = storage.OlaresSharedLibDir
}
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameSystem, systemPath, "", common.NamespaceOsSystem, vals, false); err != nil {
var platformPath = path.Join(runtime.GetInstallerDir(), "wizard", "config", "os-platform")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameOSPlatform, platformPath, "", common.NamespaceOsPlatform, vals, false); err != nil {
return err
}
// TODO: wait for the platform to be ready
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceOsFramework)
if err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
var frameworkPath = path.Join(runtime.GetInstallerDir(), "wizard", "config", "os-framework")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameOSFramework, frameworkPath, "", common.NamespaceOsFramework, vals, false); err != nil {
return err
}
@@ -245,7 +258,7 @@ func (m *InstallOsSystemModule) Init() {
Name: "CheckSystemServiceStatus",
Action: &CheckPodsRunning{
labels: map[string][]string{
"os-system": {"tier=app-service"},
"os-framework": {"tier=app-service"},
},
},
Retry: 20,
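
InstallOsSystem now performs two chart upgrades in sequence — os-platform into the os-platform namespace, then os-framework — replacing the single system chart, and the post-install check accordingly watches app-service in os-framework. The TODO above leaves the platform-readiness wait unimplemented; the snippet below is only a hypothetical sketch of such a wait using client-go, with the namespace name taken from the hunk:

    package installsketch

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPlatform polls until every deployment in os-platform reports full
    // availability; a hypothetical filling-in of the "wait for the platform to
    // be ready" TODO, not the project's actual implementation.
    func waitForPlatform(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 3*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			deps, err := cs.AppsV1().Deployments("os-platform").List(ctx, metav1.ListOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, d := range deps.Items {
    				if d.Spec.Replicas != nil && d.Status.AvailableReplicas < *d.Spec.Replicas {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }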

View File

@@ -93,7 +93,8 @@ func (t *CheckKeyPodsRunning) Execute(runtime connector.Runtime) error {
}
if strings.HasPrefix(pod.Namespace, "user-space") ||
strings.HasPrefix(pod.Namespace, "user-system") ||
pod.Namespace == "os-system" {
pod.Namespace == "os-platform" ||
pod.Namespace == "os-framework" {
if pod.Status.Phase != corev1.PodRunning {
return fmt.Errorf("pod %s/%s is not running", pod.Namespace, pod.Name)
}
@@ -295,6 +296,30 @@ func (t *InstallFinished) Execute(runtime connector.Runtime) error {
return nil
}
type DeleteUpgradeFiles struct {
common.KubeAction
}
func (d *DeleteUpgradeFiles) Execute(runtime connector.Runtime) error {
baseDir := runtime.GetBaseDir()
files, err := os.ReadDir(baseDir)
if err != nil {
return errors.Wrapf(err, "failed to read directory %s", baseDir)
}
for _, file := range files {
if strings.HasPrefix(file.Name(), "upgrade.") {
filePath := path.Join(baseDir, file.Name())
if err := os.RemoveAll(filePath); err != nil && !os.IsNotExist(err) {
logger.Warnf("failed to delete %s: %v", filePath, err)
}
}
}
return nil
}
type DeleteWizardFiles struct {
common.KubeAction
}
@@ -452,14 +477,21 @@ func (a *DeletePodsUsingHostIP) Execute(runtime connector.Runtime) error {
if err != nil {
return errors.Wrap(err, "failed to get pods using host IP")
}
a.PipelineCache.Set(common.CacheCountPodsUsingHostIP, len(targetPods))
var waitRecreationPodsCount int
for _, pod := range targetPods {
logger.Infof("restarting pod %s/%s that's using host IP", pod.Namespace, pod.Name)
err = kubeClient.CoreV1().Pods(pod.Namespace).Delete(context.Background(), pod.Name, metav1.DeleteOptions{})
if err != nil && !kerrors.IsNotFound(err) {
return errors.Wrap(err, "failed to delete pod")
}
// pods not created by any owner resource
// may not be recreated immediately and should not be waited
if len(pod.OwnerReferences) > 0 {
waitRecreationPodsCount++
}
}
a.PipelineCache.Set(common.CacheCountPodsWaitForRecreation, waitRecreationPodsCount)
// try our best to wait for the pods to be actually deleted
// to avoid the next module getting the pods with a still running phase
@@ -478,7 +510,7 @@ type WaitForPodsUsingHostIPRecreate struct {
}
func (a *WaitForPodsUsingHostIPRecreate) Execute(runtime connector.Runtime) error {
count, ok := a.PipelineCache.GetMustInt(common.CacheCountPodsUsingHostIP)
count, ok := a.PipelineCache.GetMustInt(common.CacheCountPodsWaitForRecreation)
if !ok {
return errors.New("failed to get the count of pods using host IP")
}
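
DeletePodsUsingHostIP now caches the number of pods that can actually be recreated — those with at least one owner reference — and WaitForPodsUsingHostIPRecreate reads that count instead of the raw pod count, so bare pods deleted here no longer stall the wait. The filter reduces to:

    package podsketch

    import corev1 "k8s.io/api/core/v1"

    // countRecreatable mirrors the wait-count logic above: only pods managed by
    // an owner (Deployment, DaemonSet, ...) are expected to come back.
    func countRecreatable(pods []corev1.Pod) int {
    	n := 0
    	for _, pod := range pods {
    		if len(pod.OwnerReferences) > 0 {
    			n++
    		}
    	}
    	return n
    }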

View File

@@ -16,5 +16,5 @@ data:
kind: ConfigMap
metadata:
name: backup-config
namespace: os-system`),
namespace: os-framework`),
))

View File

@@ -18,5 +18,5 @@ data:
kind: ConfigMap
metadata:
name: default-reverse-proxy-config
namespace: os-system`),
namespace: os-network`),
))

View File

@@ -92,7 +92,7 @@ func (c *CreateBackupLocation) Execute(runtime connector.Runtime) error {
return errors.Wrap(errors.WithStack(err), "velero not found")
}
var ns = "os-system"
var ns = "os-framework"
var provider = "terminus"
var storage = "terminus-cloud"
@@ -122,7 +122,7 @@ func (i *InstallVeleroPlugin) Execute(runtime connector.Runtime) error {
return errors.Wrap(errors.WithStack(err), "velero not found")
}
var ns = "os-system"
var ns = "os-framework"
var cmd = fmt.Sprintf("%s plugin get -n %s |grep 'velero.io/terminus' |wc -l", velero, ns)
pluginCounts, _ := runtime.GetRunner().SudoCmd(cmd, false, true)
if counts := utils.ParseInt(pluginCounts); counts > 0 {
@@ -160,7 +160,7 @@ func (v *PatchVelero) Execute(runtime connector.Runtime) error {
return errors.Wrap(errors.WithStack(err), "kubectl not found")
}
var ns = "os-system"
var ns = "os-framework"
var patch = `[{"op":"replace","path":"/spec/template/spec/volumes","value": [{"name":"plugins","emptyDir":{}},{"name":"scratch","emptyDir":{}},{"name":"terminus-cloud","hostPath":{"path":"/olares/rootfs/k8s-backup", "type":"DirectoryOrCreate"}}]},{"op": "replace", "path": "/spec/template/spec/containers/0/volumeMounts", "value": [{"name":"plugins","mountPath":"/plugins"},{"name":"scratch","mountPath":"/scratch"},{"mountPath":"/data","name":"terminus-cloud"}]},{"op": "replace", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true, "runAsNonRoot": false, "runAsUser": 0}}]`
if stdout, _ := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s patch deploy velero -n %s --type='json' -p='%s'", kubectl, ns, patch), false, true); stdout != "" && !strings.Contains(stdout, "patched") {

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"os"
"path"
"strings"
"time"
"github.com/beclab/Olares/cli/pkg/common"
@@ -186,16 +187,28 @@ func (u *UpgradeSystemComponents) Execute(runtime connector.Runtime) error {
if err != nil {
return fmt.Errorf("failed to get rest config: %s", err)
}
actionConfig, settings, err := utils.InitConfig(config, common.NamespaceOsSystem)
actionConfig, settings, err := utils.InitConfig(config, common.NamespaceOsPlatform)
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
systemChartPath := path.Join(runtime.GetInstallerDir(), "wizard", "config", "system")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameSystem, systemChartPath, "", common.NamespaceOsSystem, nil, true); err != nil {
platformChartPath := path.Join(runtime.GetInstallerDir(), "wizard", "config", "os-platform")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameOSPlatform, platformChartPath, "", common.NamespaceOsPlatform, nil, true); err != nil {
return err
}
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceOsFramework)
if err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
frameworkChartPath := path.Join(runtime.GetInstallerDir(), "wizard", "config", "os-framework")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameOSFramework, frameworkChartPath, "", common.NamespaceOsFramework, nil, true); err != nil {
return err
}
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceDefault)
if err != nil {
return err
@@ -209,3 +222,67 @@ func (u *UpgradeSystemComponents) Execute(runtime connector.Runtime) error {
}
return nil
}
type UpdateSysctlReservedPorts struct {
common.KubeAction
}
func (u *UpdateSysctlReservedPorts) Execute(runtime connector.Runtime) error {
const sysctlFile = "/etc/sysctl.conf"
const reservedPortsKey = "net.ipv4.ip_local_reserved_ports"
const expectedValue = "30000-32767,46800-50000"
content, err := os.ReadFile(sysctlFile)
if err != nil {
return fmt.Errorf("failed to read sysctl.conf: %v", err)
}
lines := strings.Split(string(content), "\n")
var foundKey bool
var needUpdate bool
var updatedLines []string
for _, line := range lines {
trimmedLine := strings.TrimSpace(line)
if strings.HasPrefix(trimmedLine, reservedPortsKey) {
foundKey = true
parts := strings.SplitN(trimmedLine, "=", 2)
if len(parts) == 2 {
currentValue := strings.TrimSpace(parts[1])
if currentValue != expectedValue {
logger.Infof("updating %s from %s to %s", reservedPortsKey, currentValue, expectedValue)
updatedLines = append(updatedLines, fmt.Sprintf("%s=%s", reservedPortsKey, expectedValue))
needUpdate = true
} else {
updatedLines = append(updatedLines, line)
}
} else {
updatedLines = append(updatedLines, line)
}
} else {
updatedLines = append(updatedLines, line)
}
}
if !foundKey {
logger.Infof("key %s not found in sysctl.conf, adding it", reservedPortsKey)
updatedLines = append(updatedLines, fmt.Sprintf("%s=%s", reservedPortsKey, expectedValue))
needUpdate = true
}
if needUpdate {
updatedContent := strings.Join(updatedLines, "\n")
if err := os.WriteFile(sysctlFile, []byte(updatedContent), 0644); err != nil {
return fmt.Errorf("failed to write updated sysctl.conf: %v", err)
}
if _, err := runtime.GetRunner().SudoCmd("sysctl -p", false, false); err != nil {
return fmt.Errorf("failed to reload sysctl: %v", err)
}
logger.Infof("updated and reloaded sysctl configuration")
} else {
logger.Debugf("%s already has the expected value: %s", reservedPortsKey, expectedValue)
}
return nil
}
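
UpdateSysctlReservedPorts upserts net.ipv4.ip_local_reserved_ports=30000-32767,46800-50000 into /etc/sysctl.conf and reloads it with sysctl -p, keeping the NodePort range (30000-32767) plus 46800-50000 out of the kernel's ephemeral-port allocation. A quick way to verify the live value, assuming the standard Linux procfs layout:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Standard Linux procfs path exposing the live value of this sysctl.
    	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_reserved_ports")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Printf("reserved ports: %q (expected after the upgrade task: %q)\n",
    		strings.TrimSpace(string(data)), "30000-32767,46800-50000")
    }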

View File

@@ -18,7 +18,16 @@ type UpgradeModule struct {
}
var (
preTasks []*upgradeTask
preTasks = []*upgradeTask{
{
Task: &task.LocalTask{
Name: "UpdateSysctlReservedPorts",
Action: new(UpdateSysctlReservedPorts),
},
Current: &explicitVersionMatcher{max: semver.New(1, 12, 0, "20250701", "")},
Target: anyVersion,
},
}
coreTasks = []*upgradeTask{
{

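The sysctl pre-task is gated by an explicitVersionMatcher so it only runs when the installed version is at or below 1.12.0-20250701 (the matcher's exact semantics live in the surrounding package; this reading is inferred from the max field). The cutoff itself is an ordinary semver comparison:

    package main

    import (
    	"fmt"

    	"github.com/Masterminds/semver/v3"
    )

    func main() {
    	cutoff := semver.New(1, 12, 0, "20250701", "")
    	current := semver.MustParse("1.12.0-20250630")
    	// Installed versions at or below the cutoff receive the sysctl pre-task.
    	fmt.Println(!current.GreaterThan(cutoff)) // true
    }
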
View File

@@ -3,9 +3,9 @@ package dsa
import (
"fmt"
"olares-cli/pkg/web5/crypto/dsa/ecdsa"
"olares-cli/pkg/web5/crypto/dsa/eddsa"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/ecdsa"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/eddsa"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
const (

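This and the following hunks mechanically rewrite the web5 imports from the old olares-cli module path to the canonical github.com/beclab/Olares/cli one, which presumably corresponds to a go.mod module line along these lines:

    module github.com/beclab/Olares/cli
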
View File

@@ -4,10 +4,10 @@ import (
"encoding/hex"
"testing"
"olares-cli/pkg/web5/crypto/dsa"
"olares-cli/pkg/web5/crypto/dsa/ecdsa"
"olares-cli/pkg/web5/crypto/dsa/eddsa"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/ecdsa"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/eddsa"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
"github.com/alecthomas/assert/v2"
)

View File

@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
const (

View File

@@ -6,7 +6,7 @@ import (
"errors"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
_secp256k1 "github.com/decred/dcrd/dcrec/secp256k1/v4"
"github.com/decred/dcrd/dcrec/secp256k1/v4/ecdsa"

View File

@@ -4,8 +4,8 @@ import (
"encoding/hex"
"testing"
"olares-cli/pkg/web5/crypto/dsa/ecdsa"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/ecdsa"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
"github.com/alecthomas/assert/v2"
)

View File

@@ -7,7 +7,7 @@ import (
"errors"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
const (

View File

@@ -5,8 +5,8 @@ import (
"encoding/hex"
"testing"
"olares-cli/pkg/web5/crypto/dsa/eddsa"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa/eddsa"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
"github.com/alecthomas/assert/v2"
)

View File

@@ -6,7 +6,7 @@ import (
"errors"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
const (

View File

@@ -4,7 +4,7 @@ import (
"encoding/hex"
"testing"
"olares-cli/pkg/web5/crypto"
"github.com/beclab/Olares/cli/pkg/web5/crypto"
"github.com/alecthomas/assert/v2"
)

View File

@@ -3,8 +3,8 @@ package crypto
import (
"fmt"
"olares-cli/pkg/web5/crypto/dsa"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
// KeyManager is an abstraction that can be leveraged to manage/use keys (create, sign etc) as desired per the given use case
@@ -44,7 +44,7 @@ func NewLocalKeyManager() *LocalKeyManager {
// GeneratePrivateKey generates a new private key using the algorithm provided,
// stores it in the key store and returns the key id
// Supported algorithms are available in [olares/olares-cli/pkg/web5/crypto/dsa.AlgorithmID]
// Supported algorithms are available in [olares/github.com/beclab/Olares/cli/pkg/web5/crypto/dsa.AlgorithmID]
func (k *LocalKeyManager) GeneratePrivateKey(algorithmID string) (string, error) {
var keyAlias string

View File

@@ -3,8 +3,8 @@ package crypto_test
import (
"testing"
"olares-cli/pkg/web5/crypto"
"olares-cli/pkg/web5/crypto/dsa"
"github.com/beclab/Olares/cli/pkg/web5/crypto"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa"
"github.com/alecthomas/assert/v2"
)

View File

@@ -3,9 +3,9 @@ package did
import (
"fmt"
"olares-cli/pkg/web5/crypto"
"olares-cli/pkg/web5/dids/didcore"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto"
"github.com/beclab/Olares/cli/pkg/web5/dids/didcore"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
// BearerDID is a composite type that combines a DID with a KeyManager containing keys

View File

@@ -3,12 +3,12 @@ package did_test
import (
"testing"
"olares-cli/pkg/web5/crypto/dsa"
"olares-cli/pkg/web5/dids/did"
"olares-cli/pkg/web5/dids/didcore"
"olares-cli/pkg/web5/dids/didkey"
"olares-cli/pkg/web5/jwk"
"olares-cli/pkg/web5/jws"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa"
"github.com/beclab/Olares/cli/pkg/web5/dids/did"
"github.com/beclab/Olares/cli/pkg/web5/dids/didcore"
"github.com/beclab/Olares/cli/pkg/web5/dids/didkey"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jws"
"github.com/alecthomas/assert/v2"
)

View File

@@ -3,7 +3,7 @@ package did_test
import (
"testing"
"olares-cli/pkg/web5/dids/did"
"github.com/beclab/Olares/cli/pkg/web5/dids/did"
"github.com/alecthomas/assert/v2"
)

View File

@@ -1,8 +1,8 @@
package did
import (
"olares-cli/pkg/web5/dids/didcore"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/dids/didcore"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
// PortableDID is a serializable BearerDID. VerificationMethod contains the private key

View File

@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
const (

View File

@@ -3,7 +3,7 @@ package didcore_test
import (
"testing"
"olares-cli/pkg/web5/dids/didcore"
"github.com/beclab/Olares/cli/pkg/web5/dids/didcore"
"github.com/alecthomas/assert/v2"
)

View File

@@ -4,11 +4,11 @@ import (
"context"
"fmt"
"olares-cli/pkg/web5/crypto"
"olares-cli/pkg/web5/crypto/dsa"
"olares-cli/pkg/web5/dids/did"
"olares-cli/pkg/web5/dids/didcore"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/crypto"
"github.com/beclab/Olares/cli/pkg/web5/crypto/dsa"
"github.com/beclab/Olares/cli/pkg/web5/dids/did"
"github.com/beclab/Olares/cli/pkg/web5/dids/didcore"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
)
// createOptions is a struct that contains all options that can be passed to [Create]

View File

@@ -4,7 +4,7 @@ import (
"encoding/base64"
"fmt"
"olares-cli/pkg/web5/jwk"
"github.com/beclab/Olares/cli/pkg/web5/jwk"
"github.com/mr-tron/base58"
"github.com/multiformats/go-varint"

Some files were not shown because too many files have changed in this diff.