Compare commits

...

64 Commits

Author SHA1 Message Date
dkeven
6da48e9679 feat(bfl): reuse owner's proxy config when activating sub-accounts 2025-12-23 20:04:32 +08:00
dkeven
fe98eaf1ba bfl: update image version to v0.4.36 2025-12-16 21:03:24 +08:00
dkeven
2a54db6af3 chore(bfl): remove some unused API handlers 2025-12-16 20:59:55 +08:00
wiy
4ed649bff7 feat(olares-app): update olares-app version to v1.6.23 (#2244)
* feat: update frontend system and user-service version

* feat: update vault-server version to v1.6.23

---------

Co-authored-by: icebergtsn <zyh2433219116@gmail.com>
2025-12-15 23:50:09 +08:00
hysyeah
e383c22fe5 app-service: fix v2 app stop (#2243)
* feat: v2 stop support all to stop server

* fix: app clone failed

* fix: envoy inbound skip qemu source ip (#2208)

fix: skip qemu source ip

* app-service: update owner field to use app owner from app manager

* app-service: update owner field to use app owner from app manager

* fix: argo resource namespace validate

* Revert "fix: app clone failed"

This reverts commit a8a14ab9d6.

* app-service: update app-service image tag

* fix: v2 app stop

* update app-service image tag

* feat: upgrade v0.0.90 (#2227)

Co-authored-by: ubuntu <you@example.com>

* feat(olares-app): update olares app version to v1.6.22 (#2232)

* feat(olares-app): update olares app version to v1.6.22

* feat: create empty file for uploading

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>

* chore(ci): only scan for image manifest under .olares (#2234)

---------

Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: salt <bleachzou2@163.com>
Co-authored-by: ubuntu <you@example.com>
Co-authored-by: wiy <guojianmin@bytetrade.io>
Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
Co-authored-by: dkeven <82354774+dkeven@users.noreply.github.com>
2025-12-15 23:49:25 +08:00
dkeven
ce15e2ce00 chore(cli): remove unnecessary files and code related to kubesphere (#2242) 2025-12-15 23:48:40 +08:00
eball
957dff10a6 cli: refactor timestamp check for clarity and correctness (#2241)
* cli: refactor timestamp check for clarity and correctness

* fix: improve timestamp validation logic in CheckJWS function
2025-12-15 23:47:57 +08:00
salt
da35df9280 feat: upgrade to v0.0.92 (#2239)
Co-authored-by: ubuntu <you@example.com>
2025-12-15 23:47:31 +08:00
wiy
14edf88acb fix(notifications-api): payment template id error (#2238) 2025-12-15 23:46:55 +08:00
dkeven
939a9b5ba3 refactor: merge module kubesphere into main repo (#2237) 2025-12-15 23:45:34 +08:00
Yajing
3bd0705742 docs: add redirects and refactor studio docs (#2188) 2025-12-15 21:59:24 +08:00
yajing wang
6662923b87 add redirects and address comments 2025-12-15 21:55:34 +08:00
dkeven
f39fec6c68 chore(ci): only scan for image manifest under .olares (#2234) 2025-12-15 21:41:52 +08:00
yajing wang
e1362a43f7 add screenshots and address comments 2025-12-15 21:06:52 +08:00
wiy
a7c611571f feat(olares-app): update olares app version to v1.6.22 (#2232)
* feat(olares-app): update olares app version to v1.6.22

* feat: create empty file for uploading

---------

Co-authored-by: lovehunter9 <wangrx07@aliyun.com>
2025-12-12 23:52:46 +08:00
hysyeah
f0f2d4798c app-service: argo resource namespace validate (#2230)
* feat: v2 stop support all to stop server

* fix: app clone failed

* fix: envoy inbound skip qemu source ip (#2208)

fix: skip qemu source ip

* app-service: update owner field to use app owner from app manager

* app-service: update owner field to use app owner from app manager

* fix: argo resource namespace validate

* Revert "fix: app clone failed"

This reverts commit a8a14ab9d6.

* app-service: update app-service image tag

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-12-12 23:52:10 +08:00
salt
9d6fd7a276 feat: upgrade v0.0.90 (#2227)
Co-authored-by: ubuntu <you@example.com>
2025-12-12 23:51:36 +08:00
eball
02e45a7fb3 daemon: fix intranet server restarting bug (#2229)
* daemon: fix intranet server restarting bug

* fix(watcher): correct condition for verifying AdGuard DNS pod health
2025-12-12 21:28:19 +08:00
aby913
57a003efb9 bfl: fix delete custom domain url (#2220)
* bfl: fix delete custom domain url (#2218)

* bfl: fix delete custom domain url

* cleanup(bfl): remove binary outputs

---------

Co-authored-by: dkeven <dkvvven@gmail.com>
2025-12-12 11:57:03 +08:00
berg
aca446a05a system frontend, user service, market backend: fix some bugs and update payment flow (#2223)
feat: update system-frontend, user-service and market backend version
2025-12-12 11:56:36 +08:00
dkeven
b1cb265654 fix(ci): filter out dummy image names when scanning (#2226) 2025-12-12 11:18:18 +08:00
eball
60f3976da9 tapr: upgrade pod template and image for PGCluster reconciliation (#2219)
* tapr: upgrade pod template and image for PGCluster reconciliation (#2213)

* tapr: upgrade pod template and image for PGCluster reconciliation

* fix(ci): specify working directory in github action for tapr (#2215)

---------

Co-authored-by: dkeven <82354774+dkeven@users.noreply.github.com>

* tapr: upgrade pod template and image for PGCluster reconciliation

---------

Co-authored-by: dkeven <82354774+dkeven@users.noreply.github.com>
2025-12-11 21:43:53 +08:00
eball
b5f175dcb8 app-service: update owner field to use app owner from app manager (#2217)
* feat: v2 stop support all to stop server

* fix: app clone failed

* fix: envoy inbound skip qemu source ip (#2208)

fix: skip qemu source ip

* app-service: update owner field to use app owner from app manager

* app-service: update owner field to use app owner from app manager

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-12-11 21:43:34 +08:00
dkeven
3b0cc74984 refactor: integrate module kube-state-metrics into main repo (#2214) 2025-12-11 21:14:55 +08:00
dkeven
d3b2dc3029 refactor: integrate module integration into main repo (#2212) 2025-12-11 21:14:10 +08:00
dkeven
019e1948ce refactor: integrate module backup server into main repo (#2211) 2025-12-11 21:13:30 +08:00
dkeven
2f87901cf8 feat(cli): add command to forcefully reset password (#2202)
* feat(cli): add command to forcefully reset password

* feat(deploy): update authelia image to version 0.2.43 and add verbosity to system provider logs

* lldap image tag

* fix: update lldap to 0.0.16

---------

Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: hys <hysyeah@gmail.com>
2025-12-11 21:12:47 +08:00
dkeven
0b2c5d3835 refactor: integrate module systemserver into main repo (#2210) 2025-12-11 21:12:12 +08:00
dkeven
0eeeb99620 refactor: integrate module osnode-init into main repo (#2207) 2025-12-11 21:07:42 +08:00
dkeven
e73480b353 fix(daemon): merge mirror endpoints into the main container repo (#2203) 2025-12-11 20:50:04 +08:00
dkeven
2ad44d6617 refactor: integrate module L4-BFL-proxy into main repo (#2205) 2025-12-11 20:49:29 +08:00
salt
93385b655d feat: add /file/extract-fail rest api (#2199)
* feat: add /file/extract-fail rest api

* fix: Put the certificate generation code into the search3_validation.yaml file.

* fix: If any of certCrtEnc, certKeyEnc, or caCertEnc is empty, regenerate all of them

---------

Co-authored-by: ubuntu <you@example.com>
2025-12-11 20:38:40 +08:00
dkeven
60d37998af refactor: integrate module BFL into main repo (#2206) 2025-12-11 20:35:19 +08:00
dkeven
4cf740b4f8 fix(ci): specify working directory in github action for tapr (#2215) 2025-12-11 19:59:48 +08:00
dkeven
ba8c7faa7d refactor: integrate module tapr into main repo (#2209) 2025-12-11 19:11:04 +08:00
yajing wang
6ec7f214cb add zh-cn version 2025-12-11 18:06:06 +08:00
berg
8e1e71fad3 system-frontend, files-server, market-backend, user-service: Add backup size push & upgrade progress, fix payment bug, optimize CS app sync, move uninstall popup, add disk check and no-cache (#2197)
* feat(olares-app): update olares-app version to v1.6.20

* files: upload check disk space, edit files set no-cache

---------

Co-authored-by: qq815776412 <815776412@qq.com>
Co-authored-by: aby913 <aby913@163.com>
2025-12-11 00:15:08 +08:00
hysyeah
3007c78926 feat(app-service): v2 stop support all to stop server (#2196)
* feat: v2 stop support all to stop server

* update appservice image tag
2025-12-11 00:14:28 +08:00
salt
b0787c19a1 fix: validation certification error (#2194)
* fix: fix monitor setting account bug

* feat: submit ca certificate

* fix: validation certification error

---------

Co-authored-by: ubuntu <you@example.com>
2025-12-11 00:13:43 +08:00
eball
1a485ca959 daemon: reset local domain when ip changing (#2192) 2025-12-11 00:13:05 +08:00
hysyeah
ce8c82f9b5 fix(appservice): validate env regex using extended lib (#2193) 2025-12-10 22:00:35 +08:00
Yajing
3ae6852c81 docs: update ComfyUI Launcher tutorial (#2142) 2025-12-10 20:04:40 +08:00
Meow33
380cb98b66 docs: solve formatting issues 2025-12-10 20:00:56 +08:00
Meow33
77d35d8890 docs: refine table width and path format 2025-12-10 19:52:20 +08:00
Meow33
849c098696 Apply suggestion from @fnalways
Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>
2025-12-10 19:36:55 +08:00
Meow33
42f5f3108b Apply suggestions from code review
Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>
2025-12-10 19:35:57 +08:00
dkeven
1f7be15e51 appservice: update to v0.4.58 2025-12-10 15:49:53 +08:00
dkeven
bc0da70a85 fix: validate env regex using extended lib (#2190) 2025-12-10 15:40:08 +08:00
Meow33
6898ebb3a2 docs: update based on suggestions 2025-12-10 14:54:51 +08:00
salt
63f302cd82 fix: search3validation certificate error (#2191)
* fix: fix monitor setting account bug

* feat: submit ca certificate

---------

Co-authored-by: ubuntu <you@example.com>
2025-12-10 14:39:17 +08:00
berg
08b7cb872e backup, search, system frontend: fix some bugs (#2187)
* backup: add backup total size

* feat(olares-app): update settings gpu edit

* feat: update system frontend version to v1.6.19

---------

Co-authored-by: aby913 <aby913@163.com>
Co-authored-by: qq815776412 <815776412@qq.com>
2025-12-10 14:38:24 +08:00
yajing wang
543328fa6e docs: add redirects and refactor studio docs 2025-12-10 00:59:54 +08:00
salt
3334bc69e4 fix: fix monitor setting account bug (#2186)
Co-authored-by: ubuntu <you@example.com>
2025-12-09 23:35:44 +08:00
hysyeah
4d061544a6 feat: add argo workflow to os-platform (#2182)
* feat: add argo workflow to os-platform

* fix: argo pg database

* fix: add argo crd
2025-12-09 23:35:13 +08:00
dkeven
5e58695c75 fix(cli): update initramfs after disabling nouveau kernel module (#2180) 2025-12-09 23:34:39 +08:00
eball
6ebb19db03 tapr: fix reconciling kvrocks creating event bug (#2179)
* tapr: fix reconciling kvrocks creating event bug

* Update middleware-operator image version to 0.2.28
2025-12-09 23:34:12 +08:00
eball
a08fd3b28c opa: add check for docker.io image registry in container policy (#2178) 2025-12-09 23:33:54 +08:00
simon
abbecf8e12 download-server: fix download provider cluster role (#2176)
download provider
2025-12-09 23:33:39 +08:00
hysyeah
e150b9418b app-service: sync module code (#2183)
* fix(app-service): check for nil annotations before assignment (#2163)

fix: check for nil annotations before assignment

* fix: add open telemetry netpol (#2175)

---------

Co-authored-by: dkeven <82354774+dkeven@users.noreply.github.com>
2025-12-09 21:54:37 +08:00
hysyeah
1e5176f17b app-service: fix open telemetry issue (#2177) 2025-12-09 21:42:15 +08:00
Meow33
97c12b0b21 docs: update based on suggestions 2025-12-03 17:58:41 +08:00
Meow33
9746ffdc33 Apply suggestions from code review
Co-authored-by: Yajing <110797546+fnalways@users.noreply.github.com>
2025-12-03 17:48:08 +08:00
Meow33
faa7638353 docs: update the structure and content 2025-12-02 21:27:19 +08:00
Meow33
fc57d0b9f1 docs: update ComfyUI Launcher tutorial 2025-12-02 17:54:35 +08:00
1398 changed files with 178353 additions and 3589 deletions

@@ -0,0 +1,31 @@
name: Backup Server Build test
on:
  push:
    branches:
      - "module-backup"
    paths:
      - 'framework/backup-server/**'
      - '!framework/backup-server/.olares/**'
      - '!framework/backup-server/README.md'
  pull_request:
    branches:
      - "module-backup"
    paths:
      - 'framework/backup-server/**'
      - '!framework/backup-server/.olares/**'
      - '!framework/backup-server/README.md'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.21.10'
      - name: Run Build
        run: |
          make all
        working-directory: framework/backup-server

@@ -0,0 +1,36 @@
name: Publish Backup Server to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/backup-server:v${{ github.event.inputs.tags }}
          file: framework/backup-server/Dockerfile
          context: framework/backup-server
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,36 @@
name: Publish Sidecar Backup Sync to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/sidecar-backup-sync:v${{ github.event.inputs.tags }}
          file: framework/backup-server/Dockerfile.sidecar
          context: framework/backup-server
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,43 @@
name: BFL Build test
on:
  push:
    branches: [ "module-bfl" ]
    paths:
      - 'framework/bfl/**'
      - '!framework/bfl/.olares/**'
      - '!framework/bfl/README.md'
  pull_request:
    branches: [ "module-bfl" ]
    paths:
      - 'framework/bfl/**'
      - '!framework/bfl/.olares/**'
      - '!framework/bfl/README.md'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.22.1'
      - name: Run Build
        working-directory: framework/bfl
        run: |
          ksDir="../../kubesphere-ext"
          version="v3.3.0-ext"
          if [ -d "$ksDir" ]; then
            pushd "${ksDir}/"
            branch=$(git rev-parse --abbrev-ref HEAD|awk -F / '{print $2}')
            if [ x"$branch" != x"$version" ]; then
              git checkout $version
            fi
            popd &>/dev/null
          else
            git clone https://github.com/beclab/kubesphere-ext.git "${ksDir}"
          fi
          make all

@@ -0,0 +1,36 @@
name: Publish BFL-API to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/bfl:${{ github.event.inputs.tags }}
          file: framework/bfl/Dockerfile.api
          context: framework/bfl
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,35 @@
name: Publish BFL-frpc to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build bfl-frpc and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/frpc:${{ github.event.inputs.tags }}
          file: framework/bfl/Dockerfile.frpc
          context: framework/bfl
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,35 @@
name: Publish BFL-ingress to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build bfl-ingress and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/bfl-ingress:${{ github.event.inputs.tags }}
          file: framework/bfl/Dockerfile.ingress
          context: framework/bfl
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,58 @@
name: Publish Integration Server to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: "Release Tags"
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: PR Conventional Commit Validation
        uses: ytanikin/PRConventionalCommits@1.1.0
        if: github.event_name == 'pull_request' || github.event_name == 'pull_request_target'
        with:
          task_types: '["feat","fix","docs","test","ci","refactor","perf","chore","revert","style"]'
          add_label: "true"
      - name: Check out the repo
        uses: actions/checkout@v3
        with:
          submodules: recursive
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
        with:
          image: tonistiigi/binfmt:qemu-v8.1.5
          cache-image: false
          platforms: arm64
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - uses: actions/setup-go@v2
        with:
          go-version: 1.23.3
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: get latest tag
        uses: "WyriHaximus/github-action-get-previous-tag@v1"
        id: get-latest-tag
        with:
          fallback: latest
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          file: framework/integration/Dockerfile
          push: true
          tags: beclab/integration-server:${{ github.event.inputs.tags }}
          platforms: linux/amd64,linux/arm64
          context: framework/integration

@@ -0,0 +1,36 @@
name: Publish Kube State Metrics to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  update_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and Push image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/kube-state-metrics:${{ github.event.inputs.tags }}
          file: framework/kube-state-metrics/Dockerfile
          platforms: linux/amd64,linux/arm64
          context: framework/kube-state-metrics

@@ -0,0 +1,29 @@
name: Kubesphere Build Test
on:
  push:
    branches:
      - "module-kubesphere"
    paths:
      - 'infrastructure/kubesphere/**'
      - '!infrastructure/kubesphere/.olares/**'
      - '!infrastructure/kubesphere/README.md'
  pull_request:
    branches:
      - "module-kubesphere"
    paths:
      - 'infrastructure/kubesphere/**'
      - '!infrastructure/kubesphere/.olares/**'
      - '!infrastructure/kubesphere/README.md'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.24'
      - run: make binary
        working-directory: infrastructure/kubesphere

@@ -0,0 +1,36 @@
name: Publish Kubesphere to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/ks-apiserver:${{ github.event.inputs.tags }}
          file: infrastructure/kubesphere/build/ks-apiserver/Dockerfile
          context: infrastructure/kubesphere
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,47 @@
name: L4-BFL-Proxy Build test
on:
  push:
    branches: [ "module-l4" ]
    paths:
      - 'framework/l4-bfl-proxy/**'
      - '!framework/l4-bfl-proxy/.olares/**'
      - '!framework/l4-bfl-proxy/README.md'
  pull_request:
    branches: [ "module-l4" ]
    paths:
      - 'framework/l4-bfl-proxy/**'
      - '!framework/l4-bfl-proxy/.olares/**'
      - '!framework/l4-bfl-proxy/README.md'
jobs:
  build:
    runs-on: ubuntu-latest
    # runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.18.2'
      - name: Run Build
        working-directory: framework/l4-bfl-proxy
        run: |
          ksDir="../../kubesphere"
          tag="v3.3.0"
          if [ -d "$ksDir" ]; then
            pushd "${ksDir}/"
            branch=$(git rev-parse --abbrev-ref HEAD|awk -F / '{print $2}')
            if [ x"$branch" != x"$tag" ]; then
              git checkout -b $tag
            fi
            popd &>/dev/null
          else
            git clone https://github.com/kubesphere/kubesphere.git "${ksDir}"
            pushd "${ksDir}/"
            git checkout -b $tag
            popd &>/dev/null
          fi
          make all

@@ -0,0 +1,67 @@
name: Publish L4 openresty-base to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub_amd64:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build openresty and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: bytetrade/openresty:base-${{ github.event.inputs.tags }}-amd64
          file: framework/l4-bfl-proxy/Dockerfile.openresty
          platforms: linux/amd64
          context: framework/l4-bfl-proxy
  publish_dockerhub_arm64:
    runs-on: self-hosted
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build nginx-lua and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: bytetrade/openresty:base-${{ github.event.inputs.tags }}-arm64
          file: framework/l4-bfl-proxy/Dockerfile.openresty
          platforms: linux/arm64
          context: framework/l4-bfl-proxy
  publish_manifest:
    needs:
      - publish_dockerhub_amd64
      - publish_dockerhub_arm64
    runs-on: ubuntu-latest
    steps:
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Push manifest
        run: |
          docker manifest create bytetrade/openresty:base-${{ github.event.inputs.tags }} --amend bytetrade/openresty:base-${{ github.event.inputs.tags }}-amd64 --amend bytetrade/openresty:base-${{ github.event.inputs.tags }}-arm64
          docker manifest push bytetrade/openresty:base-${{ github.event.inputs.tags }}

@@ -0,0 +1,35 @@
name: Publish L4-BFL-Proxy to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build l4-bfl-proxy and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/l4-bfl-proxy:${{ github.event.inputs.tags }}
          file: framework/l4-bfl-proxy/Dockerfile
          platforms: linux/amd64,linux/arm64
          context: framework/l4-bfl-proxy

@@ -0,0 +1,67 @@
name: Publish L4 nginx-lua to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub_amd64:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build nginx-lua and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: bytetrade/openresty:${{ github.event.inputs.tags }}-amd64
          file: framework/l4-bfl-proxy/Dockerfile.nginx
          platforms: linux/amd64
          context: framework/l4-bfl-proxy
  publish_dockerhub_arm64:
    runs-on: self-hosted
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build nginx-lua and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: bytetrade/openresty:${{ github.event.inputs.tags }}-arm64
          file: framework/l4-bfl-proxy/Dockerfile.nginx
          platforms: linux/arm64
          context: framework/l4-bfl-proxy
  publish_manifest:
    needs:
      - publish_dockerhub_amd64
      - publish_dockerhub_arm64
    runs-on: ubuntu-latest
    steps:
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Push manifest
        run: |
          docker manifest create bytetrade/openresty:${{ github.event.inputs.tags }} --amend bytetrade/openresty:${{ github.event.inputs.tags }}-amd64 --amend bytetrade/openresty:${{ github.event.inputs.tags }}-arm64
          docker manifest push bytetrade/openresty:${{ github.event.inputs.tags }}

@@ -0,0 +1,29 @@
name: OSNode-Init Build test
on:
  push:
    branches: [ "module-nodeinit" ]
    paths:
      - 'framework/osnode-init/**'
      - '!framework/osnode-init/.olares/**'
      - '!framework/osnode-init/README.md'
  pull_request:
    branches: [ "module-nodeinit" ]
    paths:
      - 'framework/osnode-init/**'
      - '!framework/osnode-init/.olares/**'
      - '!framework/osnode-init/README.md'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.24.6'
      - name: Run Build
        working-directory: framework/osnode-init
        run: |
          make all

@@ -0,0 +1,42 @@
name: Publish OSNode-Init to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.24.6'
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/osnode-init:v${{ github.event.inputs.tags }}
          file: framework/osnode-init/Dockerfile
          context: framework/osnode-init
          platforms: linux/amd64, linux/arm64

@@ -0,0 +1,31 @@
name: SystemServer Build test
on:
  push:
    branches:
      - "module-systemserver"
    paths:
      - 'framework/systemserver/**'
      - '!framework/systemserver/.olares/**'
      - '!framework/systemserver/README.md'
  pull_request:
    branches:
      - "module-systemserver"
    paths:
      - 'framework/systemserver/**'
      - '!framework/systemserver/.olares/**'
      - '!framework/systemserver/README.md'
jobs:
  build0-main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.22.6'
      - run: |
          git clone https://github.com/kubernetes/code-generator.git ../code-generator
          cd ../code-generator
          git checkout -b release-1.27
          cd -
          make system-server
        working-directory: framework/system-server

@@ -0,0 +1,37 @@
name: Publish SystemServer to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  update_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/system-server:${{ github.event.inputs.tags }}
          context: framework/system-server
          file: Dockerfile
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,37 @@
name: Publish SystemServer Provider Proxy to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  update_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/provider-proxy:${{ github.event.inputs.tags }}
          file: framework/system-server/Dockerfile.provider
          context: framework/system-server
          platforms: linux/amd64,linux/arm64

@@ -0,0 +1,28 @@
name: TAPR Build test
on:
  push:
    branches:
      - "module-tapr"
    paths:
      - 'platform/tapr/**'
      - '!platform/tapr/.olares/**'
      - '!platform/tapr/README.md'
  pull_request:
    branches:
      - "module-tapr"
    paths:
      - 'platform/tapr/**'
      - '!platform/tapr/.olares/**'
      - '!platform/tapr/README.md'
jobs:
  build0-main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: '1.23.3'
      - working-directory: platform/tapr
        run: |
          make build-uploader build-vault build-middleware

@@ -0,0 +1,37 @@
name: Publish TAPR citus to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  update_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/citus:${{ github.event.inputs.tags }}
          file: platform/tapr/docker/citus/Dockerfile
          platforms: linux/amd64, linux/arm64
          context: platform/tapr

@@ -0,0 +1,37 @@
name: Publish TAPR image-uploader to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  update_dockerhub:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/images-uploader:${{ github.event.inputs.tags }}
          file: platform/tapr/docker/uploader/Dockerfile
          context: platform/tapr
          platforms: linux/amd64, linux/arm64

@@ -0,0 +1,62 @@
name: Publish TAPR middleware-operator to Dockerhub
on:
  workflow_dispatch:
    inputs:
      tags:
        description: 'Release Tags'
jobs:
  publish_dockerhub_amd64:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push amd64 Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/middleware-operator:${{ github.event.inputs.tags }}-amd64
          file: platform/tapr/docker/middleware/Dockerfile
          context: platform/tapr
          platforms: linux/amd64
  publish_dockerhub_arm64:
    runs-on: self-hosted
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Build and push arm64 Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: beclab/middleware-operator:${{ github.event.inputs.tags }}-arm64
          file: platform/tapr/docker/middleware/Dockerfile
          context: platform/tapr
          platforms: linux/arm64
  publish_manifest:
    needs:
      - publish_dockerhub_amd64
      - publish_dockerhub_arm64
    runs-on: ubuntu-latest
    steps:
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASS }}
      - name: Push manifest
        run: |
          docker manifest create beclab/middleware-operator:${{ github.event.inputs.tags }} --amend beclab/middleware-operator:${{ github.event.inputs.tags }}-amd64 --amend beclab/middleware-operator:${{ github.event.inputs.tags }}-arm64
          docker manifest push beclab/middleware-operator:${{ github.event.inputs.tags }}

@@ -0,0 +1,37 @@
name: Publish TAPR s3rver to Dockerhub
on:
workflow_dispatch:
inputs:
tags:
description: 'Release Tags'
jobs:
update_dockerhub:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
push: true
tags: beclab/s3rver:${{ github.event.inputs.tags }}
file: platform/tapr/docker/middleware/Dockerfile.s3rver
context: platform/tapr
platforms: linux/amd64, linux/arm64

@@ -0,0 +1,62 @@
name: Publish TAPR sys-event to Dockerhub
on:
workflow_dispatch:
inputs:
tags:
description: 'Release Tags'
jobs:
publish_dockerhub_amd64:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push amd64 Docker image
uses: docker/build-push-action@v3
with:
push: true
tags: beclab/sys-event:${{ github.event.inputs.tags }}-amd64
file: platform/tapr/docker/sys-event/Dockerfile
context: platform/tapr
platforms: linux/amd64
publish_dockerhub_arm64:
runs-on: self-hosted
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push arm64 Docker image
uses: docker/build-push-action@v3
with:
push: true
tags: beclab/sys-event:${{ github.event.inputs.tags }}-arm64
file: platform/tapr/docker/sys-event/Dockerfile
context: platform/tapr
platforms: linux/arm64
publish_manifest:
needs:
- publish_dockerhub_amd64
- publish_dockerhub_arm64
runs-on: ubuntu-latest
steps:
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Push manifest
run: |
docker manifest create beclab/sys-event:${{ github.event.inputs.tags }} --amend beclab/sys-event:${{ github.event.inputs.tags }}-amd64 --amend beclab/sys-event:${{ github.event.inputs.tags }}-arm64
docker manifest push beclab/sys-event:${{ github.event.inputs.tags }}

@@ -0,0 +1,37 @@
name: Publish TAPR secret-vault to Dockerhub
on:
workflow_dispatch:
inputs:
tags:
description: 'Release Tags'
jobs:
update_dockerhub:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
push: true
tags: beclab/secret-vault:${{ github.event.inputs.tags }}
file: platform/tapr/docker/vault/Dockerfile
context: platform/tapr
platforms: linux/amd64, linux/arm64

@@ -0,0 +1,37 @@
name: Publish TAPR ws-gateway to Dockerhub
on:
workflow_dispatch:
inputs:
tags:
description: 'Release Tags'
jobs:
update_dockerhub:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
push: true
tags: beclab/ws-gateway:${{ github.event.inputs.tags }}
file: platform/tapr/docker/ws-gateway/Dockerfile
context: platform/tapr
platforms: linux/amd64, linux/arm64

@@ -318,7 +318,7 @@ spec:
chown -R 1000:1000 /uploadstemp && \
chown -R 1000:1000 /appdata
- name: olares-app-init
image: beclab/system-frontend:v1.6.16
image: beclab/system-frontend:v1.6.23
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -440,7 +440,7 @@ spec:
- name: NATS_SUBJECT_VAULT
value: os.vault.{{ .Values.bfl.username}}
- name: user-service
image: beclab/user-service:v0.0.73
image: beclab/user-service:v0.0.78
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000

@@ -17,7 +17,7 @@ for mod in "${PACKAGE_MODULE[@]}";do
chart_path="${mod}/${app}"
if [ -d $chart_path ]; then
find $chart_path -type f -name *.yaml | while read p; do
find $chart_path -type f -path '*/.olares/*.yaml' | while read p; do
bash ${BASE_DIR}/yaml2prop.sh -f $p | while read l;do
if [[ "$l" == *".image = "* || "$l" == "output.containers."*".name"* ]]; then
echo "$l"
@@ -32,8 +32,7 @@ for mod in "${PACKAGE_MODULE[@]}";do
fi
done
done
awk '{print $3}' ${TMP_MANIFEST} | sort | uniq | grep -v nitro | grep -v orion >> ${IMAGE_MANIFEST}
awk '{print $3}' ${TMP_MANIFEST} | sort | uniq | grep -v nitro | grep -v orion | grep -v '^nonexisting$' >> ${IMAGE_MANIFEST}
# patch
# fix backup server version

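The script change above narrows the scan to YAML files under `.olares` and then filters the collected manifest with `awk '{print $3}' | sort | uniq | grep -v nitro | grep -v orion`. As a rough illustration only (not code from this repo), the same filter stage can be sketched in Go:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// filterImages mimics the shell pipeline: keep the third
// whitespace-separated field of each line, dedupe, sort, and drop
// entries containing "nitro" or "orion".
func filterImages(lines []string) []string {
	seen := map[string]bool{}
	var out []string
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) < 3 {
			continue
		}
		img := fields[2]
		if strings.Contains(img, "nitro") || strings.Contains(img, "orion") || seen[img] {
			continue
		}
		seen[img] = true
		out = append(out, img)
	}
	sort.Strings(out)
	return out
}

func main() {
	lines := []string{
		"a.image = beclab/citus:v1",
		"b.image = beclab/nitro:v2",
		"c.image = beclab/citus:v1",
	}
	fmt.Println(filterImages(lines)) // duplicates collapsed, nitro dropped
}
```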
@@ -0,0 +1,170 @@
package user
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"time"
authv1 "k8s.io/api/authentication/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/spf13/cobra"
)
const (
resetNamespace = "os-framework"
resetServiceAccount = "olares-cli-sa"
resetServiceName = "auth-provider-svc"
resetServicePortName = "server"
defaultServicePort = 28080
passwordSaltSuffix = "@Olares2025"
authHeaderBearer = "Bearer "
cliAuthHeader = "Olares-CLI-Authorization"
resetRequestPathTmpl = "http://%s:%d/cli/api/reset/%s/password"
)
type resetPasswordOptions struct {
username string
password string
kubeConfig string
}
func NewCmdResetPassword() *cobra.Command {
o := &resetPasswordOptions{}
cmd := &cobra.Command{
Use: "reset-password {username}",
Short: "forcefully reset a user's password via auth-provider",
Args: cobra.ExactArgs(1),
Run: func(cmd *cobra.Command, args []string) {
o.username = args[0]
if err := o.Validate(); err != nil {
log.Fatal(err)
}
if err := o.Run(cmd.Context()); err != nil {
log.Fatal(err)
}
},
}
o.AddFlags(cmd)
return cmd
}
func (o *resetPasswordOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.password, "password", "p", "", "new password to set")
cmd.Flags().StringVar(&o.kubeConfig, "kubeconfig", "", "path to kubeconfig file (optional)")
}
func (o *resetPasswordOptions) Validate() error {
if o.username == "" {
return fmt.Errorf("username is required")
}
if o.password == "" {
return fmt.Errorf("password is required")
}
return nil
}
func (o *resetPasswordOptions) Run(ctx context.Context) error {
cfgPath := o.kubeConfig
if cfgPath == "" {
cfgPath = os.Getenv("KUBECONFIG")
if cfgPath == "" {
cfgPath = clientcmd.RecommendedHomeFile
}
}
restCfg, err := clientcmd.BuildConfigFromFlags("", cfgPath)
if err != nil {
return fmt.Errorf("failed to load kubeconfig: %w", err)
}
k8s, err := kubernetes.NewForConfig(restCfg)
if err != nil {
return fmt.Errorf("failed to create k8s client: %w", err)
}
expires := int64(3600)
tr := &authv1.TokenRequest{
Spec: authv1.TokenRequestSpec{
ExpirationSeconds: &expires,
},
}
tokenResp, err := k8s.CoreV1().ServiceAccounts(resetNamespace).CreateToken(ctx, resetServiceAccount, tr, metav1.CreateOptions{})
if err != nil {
return fmt.Errorf("failed to create service account token: %w", err)
}
saToken := tokenResp.Status.Token
if saToken == "" {
return fmt.Errorf("received empty token for service account %s/%s", resetNamespace, resetServiceAccount)
}
svc, err := k8s.CoreV1().Services(resetNamespace).Get(ctx, resetServiceName, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("failed to get service %s/%s: %w", resetNamespace, resetServiceName, err)
}
clusterIP := svc.Spec.ClusterIP
port := int32(defaultServicePort)
if len(svc.Spec.Ports) > 0 {
chosen := svc.Spec.Ports[0].Port
for _, p := range svc.Spec.Ports {
if p.Name == resetServicePortName {
chosen = p.Port
break
}
}
port = chosen
}
if clusterIP == "" {
return fmt.Errorf("service %s/%s has empty ClusterIP", resetNamespace, resetServiceName)
}
url := fmt.Sprintf(resetRequestPathTmpl, clusterIP, port, o.username)
bodyMap := map[string]string{
"password": saltedMD5(o.password),
}
payload, err := json.Marshal(bodyMap)
if err != nil {
return fmt.Errorf("failed to marshal request body: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
if err != nil {
return fmt.Errorf("failed to create http request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", authHeaderBearer+saToken)
req.Header.Set(cliAuthHeader, authHeaderBearer+saToken)
req.Header.Set("X-Forwarded-Host", fmt.Sprintf("%s.%s:%d", resetServiceName, resetNamespace, port))
httpClient := &http.Client{Timeout: 10 * time.Second}
resp, err := httpClient.Do(req)
if err != nil {
return fmt.Errorf("reset password request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
codeText := http.StatusText(resp.StatusCode)
if len(body) > 0 {
return fmt.Errorf("reset password failed: %d(%s), %s", resp.StatusCode, codeText, string(body))
}
return fmt.Errorf("reset password failed: %d(%s)", resp.StatusCode, codeText)
}
fmt.Printf("Password for user '%s' reset successfully\n", o.username)
return nil
}
func saltedMD5(s string) string {
sum := md5.Sum([]byte(s + passwordSaltSuffix))
return hex.EncodeToString(sum[:])
}

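Note that the new `reset-password` subcommand never sends the raw password over the wire: it posts `hex(md5(password + "@Olares2025"))`. That hashing step can be exercised on its own, outside the CLI:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// passwordSaltSuffix matches the suffix the CLI appends before hashing.
const passwordSaltSuffix = "@Olares2025"

// saltedMD5 reproduces the hashing applied to the password before it is
// sent to auth-provider: MD5 over password+salt, hex-encoded.
func saltedMD5(s string) string {
	sum := md5.Sum([]byte(s + passwordSaltSuffix))
	return hex.EncodeToString(sum[:])
}

func main() {
	h := saltedMD5("example-password")
	fmt.Println(h, len(h)) // always 32 hex characters, deterministic per input
}
```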
@@ -12,6 +12,7 @@ func NewUserCommand() *cobra.Command {
cmd.AddCommand(NewCmdListUsers())
cmd.AddCommand(NewCmdGetUser())
cmd.AddCommand(NewCmdActivateUser())
cmd.AddCommand(NewCmdResetPassword())
// cmd.AddCommand(NewCmdUpdateUserLimits())
return cmd
}

@@ -25,7 +25,6 @@ const (
DefaultK3sVersion = "v1.33.3-k3s"
DefaultKubernetesVersion = ""
DefaultKubeSphereVersion = "v3.3.0"
DefaultTokenMaxAge = 31536000
CurrentVerifiedCudaVersion = "13.0"
)
@@ -236,7 +235,6 @@ const (
CacheNodeNum = "node_num"
CacheRedisPassword = "redis_password"
CacheSecretsNum = "secrets_num"
CacheJwtSecret = "jwt_secret"
CacheCrdsNUm = "users_iam_num"
CacheMinioPath = "minio_binary_path"
@@ -295,7 +293,6 @@ const (
ENV_BACKUP_SECRET = "BACKUP_SECRET"
ENV_CLUSTER_ID = "CLUSTER_ID"
ENV_BACKUP_CLUSTER_BUCKET = "BACKUP_CLUSTER_BUCKET"
ENV_TOKEN_MAX_AGE = "TOKEN_MAX_AGE"
ENV_HOST_IP = "HOST_IP"
ENV_PREINSTALL = "PREINSTALL"
ENV_DISABLE_HOST_IP_PROMPT = "DISABLE_HOST_IP_PROMPT"

@@ -101,7 +101,6 @@ type Argument struct {
Storage *Storage `json:"storage"`
NetworkSettings *NetworkSettings `json:"network_settings"`
GPU *GPU `json:"gpu"`
TokenMaxAge int64 `json:"token_max_age"` // nanosecond
Request any `json:"-"`
@@ -353,15 +352,6 @@ func (a *Argument) SetOlaresCDNService(url string) {
a.OlaresCDNService = u
}
func (a *Argument) SetTokenMaxAge() {
s := os.Getenv(ENV_TOKEN_MAX_AGE)
age, err := strconv.ParseInt(s, 10, 64)
if err != nil || age == 0 {
age = DefaultTokenMaxAge
}
a.TokenMaxAge = age
}
func (a *Argument) SetGPU(enable bool) {
if a.GPU == nil {
a.GPU = new(GPU)

@@ -6,9 +6,9 @@ const (
NamespaceKubePublic = "kube-public"
NamespaceKubeSystem = "kube-system"
NamespaceKubekeySystem = "kubekey-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceKubesphereMonitoringSystem = "kubesphere-monitoring-system"
NamespaceKubesphereSystem = "kubesphere-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceOsFramework = "os-framework"
NamespaceOsPlatform = "os-platform"

@@ -1,109 +0,0 @@
package common
import (
"fmt"
"strconv"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/prepare"
"github.com/beclab/Olares/cli/pkg/core/util"
"github.com/pkg/errors"
)
type Skip struct {
KubePrepare
Not bool
}
func (p *Skip) PreCheck(runtime connector.Runtime) (bool, error) {
return !p.Not, nil
}
type Stop struct {
prepare.BasePrepare
}
func (p *Stop) PreCheck(runtime connector.Runtime) (bool, error) {
return true, nil
// return false, fmt.Errorf("STOP !!!!!!")
}
type GetCommandKubectl struct {
prepare.BasePrepare
}
func (p *GetCommandKubectl) PreCheck(runtime connector.Runtime) (bool, error) {
cmd, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("command -v %s", CommandKubectl), false, false)
if err != nil {
return true, nil
}
if cmd != "" {
p.PipelineCache.Set(CacheKubectlKey, cmd)
}
return true, nil
}
type GetMasterNum struct {
prepare.BasePrepare
}
func (p *GetMasterNum) PreCheck(runtime connector.Runtime) (bool, error) {
var kubectlpath, err = util.GetCommand(CommandKubectl)
if err != nil {
return false, fmt.Errorf("kubectl not found")
}
var cmd = fmt.Sprintf("%s get node | awk '{if(NR>1){print $3}}' | grep master | wc -l", kubectlpath)
stdout, err := runtime.GetRunner().SudoCmd(cmd, false, false)
if err != nil {
return false, errors.Wrap(errors.WithStack(err), "get master num failed")
}
masterNum, _ := strconv.ParseInt(stdout, 10, 64)
p.PipelineCache.Set(CacheMasterNum, masterNum)
return true, nil
}
type GetNodeNum struct {
prepare.BasePrepare
}
func (p *GetNodeNum) PreCheck(runtime connector.Runtime) (bool, error) {
var kubectlpath, err = util.GetCommand(CommandKubectl)
if err != nil {
return false, fmt.Errorf("kubectl not found")
}
var cmd = fmt.Sprintf("%s get node | wc -l", kubectlpath)
stdout, err := runtime.GetRunner().SudoCmd(cmd, false, false)
if err != nil {
return false, errors.Wrap(errors.WithStack(err), "get node num failed")
}
nodeNum, _ := strconv.ParseInt(stdout, 10, 64)
p.PipelineCache.Set(CacheNodeNum, nodeNum)
return true, nil
}
type ClusterType struct {
KubePrepare
ClusterType string
Not bool
}
func (p *ClusterType) PreCheck(runtime connector.Runtime) (bool, error) {
if p.KubeConf == nil || p.KubeConf.Cluster == nil {
return false, nil
}
var isK3s = p.KubeConf.Cluster.Kubernetes.Type == p.ClusterType
if p.Not {
return !isK3s, nil
}
return isK3s, nil
}

@@ -1,63 +0,0 @@
package util
import (
"crypto/hmac"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"strings"
)
var header = JWTHeader{
Alg: "HS256",
Typ: "JWT",
}
var payload = JWTPayload{
Email: "admin@kubesphere.io",
Username: "admin",
TokenType: "static_token",
}
type JWTHeader struct {
Alg string `json:"alg"`
Typ string `json:"typ"`
}
type JWTPayload struct {
Email string `json:"email"`
Username string `json:"username"`
TokenType string `json:"token_type"`
}
func EncryptToken(secret string) (string, error) {
headerJson, _ := json.Marshal(header)
headerBase64 := Base64URLEncode(headerJson)
payloadJson, _ := json.Marshal(payload)
payloadBase64 := Base64URLEncode(payloadJson)
headerPayload := fmt.Sprintf("%s.%s", headerBase64, payloadBase64)
var secretBytes = []byte(secret)
signature := HMACSHA256([]byte(headerPayload), secretBytes)
// Encode the signature to base64 URL encoding.
signatureBase64 := Base64URLEncode(signature)
return fmt.Sprintf("%s.%s", headerPayload, signatureBase64), nil
}
func Base64URLEncode(data []byte) string {
return strings.TrimRight(base64.URLEncoding.EncodeToString(data), "=")
}
// HMACSHA256 signs a message using a secret key with HMAC SHA256.
func HMACSHA256(message, secret []byte) []byte {
h := hmac.New(sha256.New, secret)
h.Write(message)
return h.Sum(nil)
}

@@ -1,16 +0,0 @@
package util
import (
"fmt"
"testing"
)
func TestToken(t *testing.T) {
var a = "n7X2dggXApH91fnVUzgPr1Fr1vAO0Upo"
// var b = `{"email": "admin@kubesphere.io","username": "admin","token_type": "static_token"}`
// eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQGt1YmVzcGhlcmUuaW8iLCJ1c2VybmFtZSI6ImFkbWluIiwidG9rZW5fdHlwZSI6InN0YXRpY190b2tlbiJ9.iwsRH37tcqE8HyI_S98AEM6KUH7bVdxDasR3V8QasXI
var data, _ = EncryptToken(a)
fmt.Println("---data---", data)
}

@@ -74,7 +74,6 @@ func (g *GenerateTerminusdServiceEnv) Execute(runtime connector.Runtime) error {
"RegistryMirrors": g.KubeConf.Arg.RegistryMirrors,
"BaseDir": baseDir,
"GpuEnable": utils.FormatBoolToInt(g.KubeConf.Arg.GPU.Enable),
"TokenMaxAge": g.KubeConf.Arg.TokenMaxAge,
},
PrintContent: true,
}

@@ -14,5 +14,4 @@ KUBE_TYPE={{ .KubeType }}
REGISTRY_MIRRORS={{ .RegistryMirrors }}
BASE_DIR={{ .BaseDir }}
LOCAL_GPU_ENABLE={{ .GpuEnable }}
TOKEN_MAX_AGE={{ .TokenMaxAge }}
`)))

@@ -730,6 +730,10 @@ func (t *WriteNouveauBlacklist) Execute(runtime connector.Runtime) error {
return errors.Wrap(errors.WithStack(err), "failed to install nouveau blacklist file")
}
if _, err := runtime.GetRunner().SudoCmd("update-initramfs -u", false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "failed to update initramfs")
}
if out, _ := runtime.GetRunner().SudoCmd("test -d /sys/module/nouveau && echo loaded || true", false, false); strings.TrimSpace(out) == "loaded" {
logger.Infof("the disable file for nouveau kernel module has been written, but the nouveau kernel module is currently loaded. Please REBOOT your machine to make the disabling effective.")
os.Exit(0)

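The added check above shells out to `test -d /sys/module/nouveau` to decide whether the nouveau module is still loaded and a reboot is needed. The same probe can be done natively; a hedged sketch, assuming the standard Linux sysfs layout (on non-Linux hosts it simply reports not loaded):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// moduleLoaded reports whether a kernel module currently appears under
// /sys/module, mirroring the `test -d /sys/module/<name>` shell check.
func moduleLoaded(name string) bool {
	info, err := os.Stat(filepath.Join("/sys/module", name))
	return err == nil && info.IsDir()
}

func main() {
	if moduleLoaded("nouveau") {
		fmt.Println("nouveau is loaded; reboot for the blacklist to take effect")
	} else {
		fmt.Println("nouveau is not loaded")
	}
}
```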
@@ -3,13 +3,14 @@ package kubesphere
import (
"encoding/json"
"fmt"
"github.com/beclab/Olares/cli/pkg/storage"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"github.com/beclab/Olares/cli/pkg/storage"
"github.com/containerd/containerd/plugin"
"github.com/pelletier/go-toml"
@@ -470,6 +471,7 @@ func (t *InitMinikubeNs) Execute(runtime connector.Runtime) error {
common.NamespaceKubekeySystem,
common.NamespaceKubesphereSystem,
common.NamespaceKubesphereMonitoringSystem,
common.NamespaceKubesphereControlsSystem,
}
for _, ns := range allNs {

@@ -17,12 +17,9 @@
package kubesphere
import (
"path/filepath"
"time"
"github.com/beclab/Olares/cli/pkg/core/action"
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/version/kubesphere/templates"
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/prepare"
@@ -61,81 +58,19 @@ func (d *DeployModule) Init() {
d.Name = "DeployKubeSphereModule"
d.Desc = "Deploy KubeSphere"
generateManifests := &task.RemoteTask{
Name: "GenerateKsInstallerCRD",
Desc: "Generate KubeSphere ks-installer crd manifests",
Hosts: d.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: &action.Template{
Name: "GenerateKsInstallerCRD",
Template: templates.KsInstaller,
Dst: filepath.Join(common.KubeAddonsDir, templates.KsInstaller.Name()),
},
Parallel: false,
}
addConfig := &task.RemoteTask{
Name: "AddKsInstallerConfig",
Desc: "Add config to ks-installer manifests",
Hosts: d.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(AddInstallerConfig),
Parallel: false,
}
createNamespace := &task.RemoteTask{
Name: "CreateKubeSphereNamespace",
Desc: "Create the kubesphere namespace",
Hosts: d.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateNamespace),
Parallel: false,
}
setup := &task.RemoteTask{
Name: "SetupKsInstallerConfig",
Desc: "Setup ks-installer config",
Hosts: d.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(Setup), // todo
Parallel: false,
Retry: 1,
}
apply := &task.RemoteTask{
Name: "ApplyKsInstaller",
Desc: "Apply ks-installer",
Hosts: d.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(Apply),
Parallel: false,
Retry: 10,
Delay: 5 * time.Second,
}
d.Tasks = []task.Interface{
generateManifests,
// apply crd installer.kubesphere.io/v1alpha1
// apply,
addConfig,
createNamespace,
setup,
apply,
}
}
@@ -157,7 +92,6 @@ func (c *CheckResultModule) Init() {
Hosts: c.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(Check),
Parallel: false,
@@ -170,7 +104,6 @@ func (c *CheckResultModule) Init() {
Hosts: c.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(GetKubeCommand),
Parallel: false,

File diff suppressed because one or more lines are too long

@@ -1,243 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:kubesphere-router-clusterrole
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- get
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: system:kubesphere-router-role
namespace: kubesphere-controls-system
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubesphere-router-serviceaccount
namespace: kubesphere-controls-system
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:nginx-ingress-clusterrole-nisa-binding
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kubesphere-router-clusterrole
subjects:
- kind: ServiceAccount
name: kubesphere-router-serviceaccount
namespace: kubesphere-controls-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: kubesphere-controls-system
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: system:kubesphere-router-role
subjects:
- kind: ServiceAccount
name: kubesphere-router-serviceaccount
namespace: kubesphere-controls-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
namespace: kubesphere-controls-system
labels:
app: kubesphere
component: kubesphere-router
version: express-1.0.alpha
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
spec:
replicas: 1
selector:
matchLabels:
app: kubesphere
component: kubesphere-router
template:
metadata:
labels:
app: kubesphere
component: kubesphere-router
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: {{ .Values.image.defaultbackend_repo }}:{{ .Values.image.defaultbackend_tag | default "latest" }}
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: kubesphere-controls-system
labels:
app: kubesphere
component: kubesphere-router
annotations:
kubernetes.io/created-by: kubesphere.io/ks-router
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: kubesphere
component: kubesphere-router
---
# create a serviceaccount for the kubectl pod
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubesphere-cluster-admin
namespace: kubesphere-controls-system
annotations:
kubernetes.io/created-by: kubesphere.io/kubectl
---
# bind kubesphere-cluster-admin sa to clusterrole cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kubesphere-cluster-admin
annotations:
kubernetes.io/created-by: kubesphere.io/kubectl
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubesphere-cluster-admin
namespace: kubesphere-controls-system

@@ -1,466 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.6.1
creationTimestamp: null
name: clusterdashboards.monitoring.kubesphere.io
spec:
group: monitoring.kubesphere.io
names:
kind: ClusterDashboard
listKind: ClusterDashboardList
plural: clusterdashboards
singular: clusterdashboard
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: ClusterDashboard is the Schema for the clusterdashboards API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DashboardSpec defines the desired state of Dashboard
properties:
datasource:
description: Dashboard datasource
type: string
description:
description: Dashboard description
type: string
panels:
description: Collection of panels. Panel is one of [Row](row.md),
[Singlestat](#singlestat.md) or [Graph](graph.md)
items:
description: Supported panel
properties:
bars:
description: A collection of queries Targets []Target `json:"targets,omitempty"`
Display as a bar chart
type: boolean
colors:
description: Set series color
items:
type: string
type: array
decimals:
description: Name of the signlestat panel Title string `json:"title,omitempty"`
Must be `singlestat` Type string `json:"type"` Panel ID Id
int64 `json:"id,omitempty"` A collection of queries Targets
[]Target `json:"targets,omitempty"` Limit the decimal numbers
format: int64
type: integer
description:
description: Name of the graph panel Title string `json:"title,omitempty"`
Must be `graph` Type string `json:"type"` Panel ID Id int64
`json:"id,omitempty"` Panel description
type: string
format:
description: Display unit
type: string
id:
description: Panel ID
format: int64
type: integer
lines:
description: Display as a line chart
type: boolean
stack:
description: Display as a stacked chart
type: boolean
targets:
description: A collection of queries Only for panels with `graph`
or `singlestat` type
items:
description: Query editor options
properties:
expr:
description: Input for fetching metrics.
type: string
legendFormat:
description: Legend format for outputs. You can make a
dynamic legend with templating variables.
type: string
refId:
description: Reference ID
format: int64
type: integer
step:
description: Set series time interval
type: string
type: object
type: array
title:
description: Name of the panel
type: string
type:
description: Panel Type, one of `row`, `graph`, `singlestat`
type: string
yaxes:
description: Y-axis options
items:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
type: array
required:
- type
type: object
type: array
templating:
description: Templating variables
items:
description: Templating defines a variable, which can be used as
a placeholder in query
properties:
name:
description: Variable name
type: string
query:
description: Set variable values to be the return result of
the query
type: string
type: object
type: array
time:
description: Time range for display
properties:
from:
description: Start time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
eg. `now-1M`. It denotes the start time is set to one month
before now.
type: string
to:
description: End time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
eg. `now-1M`. It denotes the end time is set to one month
before now.
type: string
type: object
title:
description: Dashboard title
type: string
type: object
type: object
served: true
storage: false
- name: v1alpha2
schema:
openAPIV3Schema:
description: ClusterDashboard is the Schema for the clusterdashboards API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DashboardSpec defines the desired state of Dashboard
properties:
annotations:
description: Annotations
items:
properties:
datasource:
type: string
enable:
type: boolean
expr:
type: string
iconColor:
type: string
iconSize:
type: integer
lineColor:
type: string
name:
type: string
query:
type: string
showLine:
type: boolean
step:
type: string
tagKeys:
type: string
tags:
items:
type: string
type: array
tagsField:
type: string
textField:
type: string
textFormat:
type: string
titleFormat:
type: string
type:
type: string
type: object
type: array
auto_refresh:
type: string
description:
type: string
editable:
type: boolean
id:
type: integer
panels:
items:
properties:
bars:
description: Display as a bar chart
type: boolean
colors:
description: Set series color
items:
type: string
type: array
content:
type: string
datasource:
description: Datasource
type: string
decimals:
format: int64
type: integer
description:
description: Description
type: string
format:
description: Display unit
type: string
gauge:
description: gauge
properties:
maxValue:
format: int64
type: integer
minValue:
format: int64
type: integer
show:
type: boolean
thresholdLabels:
type: boolean
thresholdMarkers:
type: boolean
type: object
height:
description: Height
type: string
id:
description: Panel ID
format: int64
type: integer
legend:
description: legend
items:
type: string
type: array
lines:
description: Display as a line chart
type: boolean
mode:
type: string
options:
properties:
colorMode:
type: string
content:
type: string
displayMode:
type: string
graphMode:
type: string
justifyMode:
type: string
mode:
type: string
orientation:
type: string
textMode:
type: string
type: object
scroll:
type: boolean
sort:
properties:
col:
type: integer
desc:
type: boolean
type: object
sparkline:
description: 'spark line: full or bottom'
type: string
stack:
description: Display as a stacked chart
type: boolean
targets:
description: A collection of queries
items:
description: Query editor options. Refers to https://pkg.go.dev/github.com/grafana-tools/sdk#Target
properties:
expr:
description: 'Only Prometheus is supported. Input
for fetching metrics.'
type: string
legendFormat:
description: Legend format for outputs. You can make a
dynamic legend with templating variables.
type: string
refId:
description: Reference ID
format: int64
type: integer
step:
description: Set series time interval
type: string
type: object
type: array
title:
description: Name of the panel
type: string
type:
description: Type of the panel
type: string
valueName:
description: value name
type: string
xaxis:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
yaxes:
description: Y-axis options
items:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
type: array
type: object
type: array
shared_crosshair:
type: boolean
tags:
items:
type: string
type: array
templatings:
description: Templating variables
items:
properties:
allFormat:
type: string
allValue:
type: string
auto:
type: boolean
auto_count:
type: integer
datasource:
type: string
hide:
type: integer
includeAll:
type: boolean
label:
type: string
multi:
type: boolean
multiFormat:
type: string
name:
type: string
options:
items:
properties:
selected:
type: boolean
text:
type: string
value:
type: string
type: object
type: array
query:
type: string
regex:
type: string
sort:
type: integer
type:
type: string
type: object
type: array
time:
description: Time range
properties:
from:
description: Start time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the start time to one
month before now.
type: string
to:
description: End time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the end time to one
month before now.
type: string
type: object
timezone:
type: string
title:
type: string
uid:
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
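The `from`/`to` fields above are constrained by the pattern `^now([+-][0-9]+[smhdwMy])?$`. A minimal sketch in Go of validating such relative time expressions (the `validRange` helper is ours, not part of the CRD tooling):

```go
package main

import (
	"fmt"
	"regexp"
)

// timeRange matches the dashboard's relative time syntax, e.g. "now" or "now-1M".
var timeRange = regexp.MustCompile(`^now([+-][0-9]+[smhdwMy])?$`)

// validRange reports whether both endpoints use the supported syntax.
func validRange(from, to string) bool {
	return timeRange.MatchString(from) && timeRange.MatchString(to)
}

func main() {
	fmt.Println(validRange("now-1M", "now")) // prints true: a one-month window ending now
	fmt.Println(validRange("yesterday", "now")) // prints false: unsupported syntax
}
```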


@@ -1,470 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.6.1
creationTimestamp: null
name: dashboards.monitoring.kubesphere.io
spec:
group: monitoring.kubesphere.io
names:
kind: Dashboard
listKind: DashboardList
plural: dashboards
singular: dashboard
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: Dashboard is the Schema for the dashboards API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DashboardSpec defines the desired state of Dashboard
properties:
datasource:
description: Dashboard datasource
type: string
description:
description: Dashboard description
type: string
panels:
description: Collection of panels. Panel is one of [Row](row.md),
[Singlestat](singlestat.md) or [Graph](graph.md)
items:
description: Supported panel
properties:
bars:
description: Display as a bar chart
type: boolean
colors:
description: Set series color
items:
type: string
type: array
decimals:
description: Limit the decimal numbers
format: int64
type: integer
description:
description: Panel description
type: string
format:
description: Display unit
type: string
id:
description: Panel ID
format: int64
type: integer
lines:
description: Display as a line chart
type: boolean
stack:
description: Display as a stacked chart
type: boolean
targets:
description: A collection of queries Only for panels with `graph`
or `singlestat` type
items:
description: Query editor options
properties:
expr:
description: Input for fetching metrics.
type: string
legendFormat:
description: Legend format for outputs. You can make a
dynamic legend with templating variables.
type: string
refId:
description: Reference ID
format: int64
type: integer
step:
description: Set series time interval
type: string
type: object
type: array
title:
description: Name of the panel
type: string
type:
description: Panel Type, one of `row`, `graph`, `singlestat`
type: string
yaxes:
description: Y-axis options
items:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
type: array
required:
- type
type: object
type: array
templating:
description: Templating variables
items:
description: Templating defines a variable, which can be used as
a placeholder in query
properties:
name:
description: Variable name
type: string
query:
description: Set variable values to be the return result of
the query
type: string
type: object
type: array
time:
description: Time range for display
properties:
from:
description: Start time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the start time to one
month before now.
type: string
to:
description: End time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the end time to one
month before now.
type: string
type: object
title:
description: Dashboard title
type: string
type: object
type: object
served: true
storage: false
subresources:
status: {}
- name: v1alpha2
schema:
openAPIV3Schema:
description: Dashboard is the Schema for the dashboards API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DashboardSpec defines the desired state of Dashboard
properties:
annotations:
description: Annotations
items:
properties:
datasource:
type: string
enable:
type: boolean
expr:
type: string
iconColor:
type: string
iconSize:
type: integer
lineColor:
type: string
name:
type: string
query:
type: string
showLine:
type: boolean
step:
type: string
tagKeys:
type: string
tags:
items:
type: string
type: array
tagsField:
type: string
textField:
type: string
textFormat:
type: string
titleFormat:
type: string
type:
type: string
type: object
type: array
auto_refresh:
type: string
description:
type: string
editable:
type: boolean
id:
type: integer
panels:
items:
properties:
bars:
description: Display as a bar chart
type: boolean
colors:
description: Set series color
items:
type: string
type: array
content:
type: string
datasource:
description: Datasource
type: string
decimals:
format: int64
type: integer
description:
description: Description
type: string
format:
description: Display unit
type: string
gauge:
description: gauge
properties:
maxValue:
format: int64
type: integer
minValue:
format: int64
type: integer
show:
type: boolean
thresholdLabels:
type: boolean
thresholdMarkers:
type: boolean
type: object
height:
description: Height
type: string
id:
description: Panel ID
format: int64
type: integer
legend:
description: legend
items:
type: string
type: array
lines:
description: Display as a line chart
type: boolean
mode:
type: string
options:
properties:
colorMode:
type: string
content:
type: string
displayMode:
type: string
graphMode:
type: string
justifyMode:
type: string
mode:
type: string
orientation:
type: string
textMode:
type: string
type: object
scroll:
type: boolean
sort:
properties:
col:
type: integer
desc:
type: boolean
type: object
sparkline:
description: 'spark line: full or bottom'
type: string
stack:
description: Display as a stacked chart
type: boolean
targets:
description: A collection of queries
items:
description: Query editor options. Refers to https://pkg.go.dev/github.com/grafana-tools/sdk#Target
properties:
expr:
description: 'Only Prometheus is supported. Input
for fetching metrics.'
type: string
legendFormat:
description: Legend format for outputs. You can make a
dynamic legend with templating variables.
type: string
refId:
description: Reference ID
format: int64
type: integer
step:
description: Set series time interval
type: string
type: object
type: array
title:
description: Name of the panel
type: string
type:
description: Type of the panel
type: string
valueName:
description: value name
type: string
xaxis:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
yaxes:
description: Y-axis options
items:
properties:
decimals:
description: Limit the decimal numbers
format: int64
type: integer
format:
description: Display unit
type: string
type: object
type: array
type: object
type: array
shared_crosshair:
type: boolean
tags:
items:
type: string
type: array
templatings:
description: Templating variables
items:
properties:
allFormat:
type: string
allValue:
type: string
auto:
type: boolean
auto_count:
type: integer
datasource:
type: string
hide:
type: integer
includeAll:
type: boolean
label:
type: string
multi:
type: boolean
multiFormat:
type: string
name:
type: string
options:
items:
properties:
selected:
type: boolean
text:
type: string
value:
type: string
type: object
type: array
query:
type: string
regex:
type: string
sort:
type: integer
type:
type: string
type: object
type: array
time:
description: Time range
properties:
from:
description: Start time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the start time to one
month before now.
type: string
to:
description: End time in the format of `^now([+-][0-9]+[smhdwMy])?$`,
e.g. `now-1M`, which sets the end time to one
month before now.
type: string
type: object
timezone:
type: string
title:
type: string
uid:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -21,24 +21,6 @@ type CreateKsCore struct {
}
func (t *CreateKsCore) Execute(runtime connector.Runtime) error {
//var kubectlpath, err = util.GetCommand(common.CommandKubectl)
//if err != nil {
// return fmt.Errorf("kubectl not found")
//}
//var cmd = fmt.Sprintf("%s get pod -n %s -l 'app=redis,tier=database,version=redis-4.0' -o jsonpath='{.items[0].status.phase}'", kubectlpath,
// common.NamespaceKubesphereSystem)
//rphase, err := runtime.GetRunner().Host.SudoCmd(cmd, false, false)
//if rphase != "Running" {
// return fmt.Errorf("Redis State %s", rphase)
//}
masterNumIf, ok := t.PipelineCache.Get(common.CacheMasterNum)
if !ok || masterNumIf == nil {
return fmt.Errorf("failed to get master num")
}
masterNum := masterNumIf.(int64)
config, err := ctrl.GetConfig()
if err != nil {
return err
@@ -55,7 +37,7 @@ func (t *CreateKsCore) Execute(runtime connector.Runtime) error {
var values = make(map[string]interface{})
values["Release"] = map[string]string{
"Namespace": common.NamespaceKubesphereSystem,
"ReplicaCount": fmt.Sprintf("%d", masterNum),
"ReplicaCount": fmt.Sprintf("%d", 1),
}
if err := utils.UpgradeCharts(context.Background(), actionConfig, settings, appKsCoreName,
appPath, "", common.NamespaceKubesphereSystem, values, false); err != nil {
@@ -78,7 +60,6 @@ func (m *DeployKsCoreModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateKsCore),
Parallel: false,


@@ -6,7 +6,6 @@ import (
"os"
"path"
"path/filepath"
"strconv"
"time"
"github.com/beclab/Olares/cli/pkg/common"
@@ -21,129 +20,6 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
)
var kscorecrds = []map[string]string{
{
"ns": "kubesphere-controls-system",
"kind": "serviceaccounts",
"resource": "kubesphere-cluster-admin",
"release": "ks-core",
},
{
"ns": "kubesphere-controls-system",
"kind": "serviceaccounts",
"resource": "kubesphere-router-serviceaccount",
"release": "ks-core",
},
{
"ns": "kubesphere-controls-system",
"kind": "role",
"resource": "system:kubesphere-router-role",
"release": "ks-core",
},
{
"ns": "kubesphere-controls-system",
"kind": "rolebinding",
"resource": "nginx-ingress-role-nisa-binding",
"release": "ks-core",
},
{
"ns": "kubesphere-controls-system",
"kind": "deployment",
"resource": "default-http-backend",
"release": "ks-core",
},
{
"ns": "kubesphere-controls-system",
"kind": "service",
"resource": "default-http-backend",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "secrets",
// "resource": "ks-controller-manager-webhook-cert",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "serviceaccounts",
"resource": "kubesphere",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "clusterroles",
"resource": "system:kubesphere-router-clusterrole",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "clusterrolebindings",
"resource": "system:nginx-ingress-clusterrole-nisa-binding",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "clusterrolebindings",
"resource": "system:kubesphere-cluster-admin",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "clusterrolebindings",
"resource": "kubesphere",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "services",
"resource": "ks-apiserver",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "services",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "deployments",
"resource": "ks-apiserver",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "deployments",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
//{
// "ns": "kubesphere-system",
// "kind": "validatingwebhookconfigurations",
// "resource": "users.iam.kubesphere.io",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "validatingwebhookconfigurations",
"resource": "resourcesquotas.quota.kubesphere.io",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "validatingwebhookconfigurations",
"resource": "network.kubesphere.io",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "users.iam.kubesphere.io",
"resource": "admin",
"release": "ks-core",
},
}
type CreateKsRole struct {
common.KubeAction
}
@@ -167,42 +43,11 @@ func (t *CreateKsRole) Execute(runtime connector.Runtime) error {
return nil
}
type PatchKsCoreStatus struct {
common.KubeAction
}
func (t *PatchKsCoreStatus) Execute(runtime connector.Runtime) error {
//var kubectlpath, _ = t.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
//if kubectlpath == "" {
// kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
//}
//
//var jsonPath = fmt.Sprintf(`{\"status\": {\"core\": {\"status\": \"enabled\", \"enabledTime\": \"%s\"}}}`, time.Now().Format("2006-01-02T15:04:05Z"))
//var cmd = fmt.Sprintf("%s patch cc ks-installer --type merge -p '%s' -n %s", kubectlpath, jsonPath, common.NamespaceKubesphereSystem)
//
//_, err := runtime.GetRunner().Host.SudoCmd(cmd, false, true)
//if err != nil {
// return errors.Wrap(errors.WithStack(err), "patch ks-core status failed")
//}
return nil
}
type CreateKsCoreConfig struct {
common.KubeAction
}
func (t *CreateKsCoreConfig) Execute(runtime connector.Runtime) error {
jwtSecretIf, ok := t.PipelineCache.Get(common.CacheJwtSecret)
if !ok || jwtSecretIf == nil {
return fmt.Errorf("failed to get jwt secret")
}
kubeVersionIf, ok := t.PipelineCache.Get(common.CacheKubeletVersion)
if !ok || kubeVersionIf == nil {
return fmt.Errorf("failed to get kubelet version")
}
config, err := ctrl.GetConfig()
if err != nil {
return err
@@ -230,13 +75,8 @@ func (t *CreateKsCoreConfig) Execute(runtime connector.Runtime) error {
// create ks-config
var appKsConfigName = common.ChartNameKsConfig
appPath = path.Join(runtime.GetInstallerDir(), cc.BuildFilesCacheDir, cc.BuildDir, appKsConfigName)
values = make(map[string]interface{})
values["Release"] = map[string]interface{}{
"JwtSecret": jwtSecretIf.(string),
"TokenMaxAge": t.KubeConf.Arg.TokenMaxAge * int64(time.Second),
}
if err := utils.UpgradeCharts(context.Background(), actionConfig, settings, appKsConfigName,
appPath, "", common.NamespaceKubesphereSystem, values, false); err != nil {
appPath, "", common.NamespaceKubesphereSystem, nil, false); err != nil {
logger.Errorf("failed to install %s chart: %v", appKsConfigName, err)
return err
}
@@ -273,82 +113,6 @@ func (t *CreateKsCoreConfigManifests) Execute(runtime connector.Runtime) error {
return nil
}
type PacthKsCore struct {
common.KubeAction
}
func (t *PacthKsCore) Execute(runtime connector.Runtime) error {
var secretsNum int64
var crdNum int64
var secretsNumIf, ok = t.PipelineCache.Get(common.CacheSecretsNum)
if ok && secretsNumIf != nil {
secretsNum = secretsNumIf.(int64)
}
crdNumIf, ok := t.PipelineCache.Get(common.CacheCrdsNUm)
if ok && crdNumIf != nil {
crdNum = crdNumIf.(int64)
}
var kubectlpath, err = util.GetCommand(common.CommandKubectl)
if err != nil {
return fmt.Errorf("kubectl not found")
}
if secretsNum == 0 && crdNum != 0 {
for _, item := range kscorecrds {
var cmd = fmt.Sprintf("%s -n %s annotate --overwrite %s %s meta.helm.sh/release-name=%s && %s -n %s annotate --overwrite %s %s meta.helm.sh/release-namespace=%s && %s -n %s label --overwrite %s %s app.kubernetes.io/managed-by=Helm",
kubectlpath, item["ns"], item["kind"], item["resource"], item["release"],
kubectlpath, item["ns"], item["kind"], item["resource"], common.NamespaceKubesphereSystem,
kubectlpath, item["ns"], item["kind"], item["resource"])
if _, err := runtime.GetRunner().SudoCmd(cmd, false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "patch ks-core crd")
}
}
}
return nil
}
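The removed `PacthKsCore` step above follows the usual Helm adoption pattern: annotating pre-existing resources with `meta.helm.sh/release-name` and `meta.helm.sh/release-namespace`, and labeling them `app.kubernetes.io/managed-by=Helm`, so a later Helm release can take ownership of them. A self-contained sketch of the command construction (the `adoptCmd` helper is ours):

```go
package main

import "fmt"

// adoptCmd builds the kubectl invocation that lets Helm adopt a pre-existing
// resource, mirroring the annotate/label sequence used by PacthKsCore.
func adoptCmd(kubectl, ns, kind, resource, release, releaseNs string) string {
	return fmt.Sprintf(
		"%[1]s -n %[2]s annotate --overwrite %[3]s %[4]s meta.helm.sh/release-name=%[5]s && "+
			"%[1]s -n %[2]s annotate --overwrite %[3]s %[4]s meta.helm.sh/release-namespace=%[6]s && "+
			"%[1]s -n %[2]s label --overwrite %[3]s %[4]s app.kubernetes.io/managed-by=Helm",
		kubectl, ns, kind, resource, release, releaseNs)
}

func main() {
	fmt.Println(adoptCmd("kubectl", "kubesphere-system", "services", "ks-apiserver",
		"ks-core", "kubesphere-system"))
}
```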
type CheckKsCoreExist struct {
common.KubeAction
}
func (t *CheckKsCoreExist) Execute(runtime connector.Runtime) error {
var kubectlpath, err = util.GetCommand(common.CommandKubectl)
if err != nil {
return fmt.Errorf("kubectl not found")
}
var cmd string
cmd = fmt.Sprintf("%s -n %s get secrets --field-selector=type=helm.sh/release.v1 | grep ks-core |wc -l",
kubectlpath,
common.NamespaceKubesphereSystem)
stdout, _ := runtime.GetRunner().SudoCmd(cmd, false, false)
secretNum, err := strconv.ParseInt(stdout, 10, 64)
if err != nil {
secretNum = 0
}
cmd = fmt.Sprintf("%s get crd users.iam.kubesphere.io | grep 'users.iam.kubesphere.io' |wc -l", kubectlpath)
stdout, _ = runtime.GetRunner().SudoCmd(cmd, false, false)
usersCrdNum, err := strconv.ParseInt(stdout, 10, 64)
if err != nil {
usersCrdNum = 0
}
logger.Debugf("secretNum: %d, usersCrdNum: %d", secretNum, usersCrdNum)
t.ModuleCache.Set(common.CacheSecretsNum, secretNum)
t.ModuleCache.Set(common.CacheCrdsNUm, usersCrdNum)
return nil
}
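In `CheckKsCoreExist`, the stdout of the `... | wc -l` pipeline goes straight into `strconv.ParseInt`. Depending on how the runner trims command output, `wc` output can carry leading spaces and a trailing newline that `ParseInt` rejects, silently falling back to 0. A sketch of a more forgiving parse (the `parseCount` helper is ours):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCount converts the stdout of a `... | wc -l` pipeline into an int64.
// Trimming first avoids ParseInt failures on surrounding whitespace; any
// remaining parse error falls back to 0, matching the original behavior.
func parseCount(stdout string) int64 {
	n, err := strconv.ParseInt(strings.TrimSpace(stdout), 10, 64)
	if err != nil {
		return 0
	}
	return n
}

func main() {
	fmt.Println(parseCount("  3\n")) // prints 3
	fmt.Println(parseCount(""))      // prints 0
}
```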
type DeployKsCoreConfigModule struct {
common.KubeModule
}
@@ -356,37 +120,11 @@ type DeployKsCoreConfigModule struct {
func (m *DeployKsCoreConfigModule) Init() {
m.Name = "DeployKsCoreConfig"
checkKsCoreExist := &task.RemoteTask{
Name: "CheckKsCoreExist",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
new(common.GetMasterNum),
},
Action: new(CheckKsCoreExist),
Parallel: false,
Retry: 0,
}
pacthKsCore := &task.RemoteTask{
Name: "PacthKsCore",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(PacthKsCore),
Parallel: false,
Retry: 0,
}
createKsCoreConfigManifests := &task.RemoteTask{
Name: "CreateKsCoreConfigManifests",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateKsCoreConfigManifests),
Parallel: false,
@@ -399,31 +137,17 @@ func (m *DeployKsCoreConfigModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateKsCoreConfig),
Parallel: true,
Retry: 0,
}
patchKsCoreStatus := &task.RemoteTask{
Name: "PatchKsCoreStatus",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(PatchKsCoreStatus),
Parallel: true,
Retry: 0,
}
createKsRole := &task.RemoteTask{
Name: "CreateKsRole",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateKsRole),
Parallel: true,
@@ -431,11 +155,8 @@ func (m *DeployKsCoreConfigModule) Init() {
}
m.Tasks = []task.Interface{
checkKsCoreExist,
pacthKsCore,
createKsCoreConfigManifests,
createKsCoreConfig,
patchKsCoreStatus,
createKsRole,
}
}


@@ -1,17 +0,0 @@
package plugins
// ! Y ./ks-monitor/files/federated --> notification-manager
// ~ N ./ks-monitor/files/gpu-monitoring
// ~ Y ./ks-monitor/files/ks-istio-monitoring
// ! Y ./ks-monitor/files/monitoring-dashboard
// ! Y ./ks-monitor/files/notification-manager
// ! Y ./ks-monitor/files/prometheus/alertmanager
// ~ N ./ks-monitor/files/prometheus/etcd
// ~ N ./ks-monitor/files/prometheus/grafana
// ~ N ./ks-monitor/files/prometheus/kube-prometheus
// ! Y ./ks-monitor/files/prometheus/kube-state-metrics
// ! Y ./ks-monitor/files/prometheus/kubernetes
// ! Y ./ks-monitor/files/prometheus/node-exporter
// ! Y ./ks-monitor/files/prometheus/prometheus
// ! Y ./ks-monitor/files/prometheus/prometheus-operator
// ~ N ./ks-monitor/files/prometheus/thanos-ruler


@@ -1,10 +1,7 @@
package plugins
import (
"time"
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/prepare"
"github.com/beclab/Olares/cli/pkg/core/task"
)
@@ -24,40 +21,3 @@ func (t *CopyEmbed) Init() {
copyEmbed,
}
}
type DeployKsPluginsModule struct {
common.KubeModule
}
func (t *DeployKsPluginsModule) Init() {
t.Name = "DeployKsPlugins"
checkNodeState := &task.RemoteTask{
Name: "CheckNodeState",
Hosts: t.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CheckNodeState),
Parallel: false,
Retry: 20,
Delay: 10 * time.Second,
}
initNs := &task.RemoteTask{
Name: "InitKsNamespace",
Hosts: t.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(InitNamespace),
Parallel: false,
}
t.Tasks = []task.Interface{
checkNodeState,
initNs,
}
}


@@ -1,56 +0,0 @@
package plugins
import (
"fmt"
"path"
"github.com/beclab/Olares/cli/pkg/common"
cc "github.com/beclab/Olares/cli/pkg/core/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/prepare"
"github.com/beclab/Olares/cli/pkg/core/task"
"github.com/beclab/Olares/cli/pkg/core/util"
)
type InstallMonitorDashboardCrd struct {
common.KubeAction
}
func (t *InstallMonitorDashboardCrd) Execute(runtime connector.Runtime) error {
var kubectlpath, err = util.GetCommand(common.CommandKubectl)
if err != nil {
return fmt.Errorf("kubectl not found")
}
var p = path.Join(runtime.GetInstallerDir(), cc.BuildFilesCacheDir, cc.BuildDir, "ks-monitor", "monitoring-dashboard")
var cmd = fmt.Sprintf("%s apply -f %s", kubectlpath, p)
if _, err := runtime.GetRunner().SudoCmd(cmd, false, true); err != nil {
return err
}
return nil
}
type CreateMonitorDashboardModule struct {
common.KubeModule
}
func (m *CreateMonitorDashboardModule) Init() {
m.Name = "CreateMonitorDashboardModule"
installMonitorDashboardCrd := &task.RemoteTask{
Name: "InstallMonitorDashboardCrd",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(InstallMonitorDashboardCrd),
Parallel: false,
Retry: 0,
}
m.Tasks = []task.Interface{
installMonitorDashboardCrd,
}
}


@@ -17,14 +17,8 @@
package plugins
import (
"fmt"
"path"
"strings"
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/utils"
"github.com/pkg/errors"
)
type IsCloudInstance struct {
@@ -43,72 +37,3 @@ func (p *IsCloudInstance) PreCheck(runtime connector.Runtime) (bool, error) {
}
return p.Not, nil
}
type CheckStorageClass struct {
common.KubePrepare
}
func (p *CheckStorageClass) PreCheck(runtime connector.Runtime) (bool, error) {
var kubectlpath, _ = p.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
var cmd = fmt.Sprintf("%s get sc | awk '{if(NR>1){print $1}}'", kubectlpath)
stdout, err := runtime.GetRunner().SudoCmd(cmd, false, true)
if err != nil {
return false, errors.Wrap(errors.WithStack(err), "get storageclass failed")
}
if stdout == "" {
return false, fmt.Errorf("no storageclass found")
}
cmd = fmt.Sprintf("%s get sc --no-headers", kubectlpath)
stdout, err = runtime.GetRunner().SudoCmd(cmd, false, true)
if err != nil {
return false, errors.Wrap(errors.WithStack(err), "get storageclass failed")
}
if stdout == "" {
return false, fmt.Errorf("no storageclass found")
}
if !strings.Contains(stdout, "(default)") {
return false, fmt.Errorf("default storageclass was not found")
}
return true, nil
}
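`CheckStorageClass` keys off the `(default)` marker that kubectl appends to the default StorageClass in `kubectl get sc` output. The check can be isolated as follows (the `hasDefaultClass` helper and the sample output are ours):

```go
package main

import (
	"fmt"
	"strings"
)

// hasDefaultClass reports whether `kubectl get sc --no-headers` output
// is non-empty and lists a class annotated as the cluster default.
func hasDefaultClass(out string) bool {
	return strings.TrimSpace(out) != "" && strings.Contains(out, "(default)")
}

func main() {
	sample := "local-path (default)   rancher.io/local-path   Delete   WaitForFirstConsumer\n"
	fmt.Println(hasDefaultClass(sample)) // prints true
	fmt.Println(hasDefaultClass(""))     // prints false
}
```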
type GenerateRedisPassword struct {
common.KubePrepare
}
func (p *GenerateRedisPassword) PreCheck(runtime connector.Runtime) (bool, error) {
pass, err := utils.GeneratePassword(15)
if err != nil {
return false, err
}
if pass == "" {
return false, fmt.Errorf("failed to generate redis password")
}
p.PipelineCache.Set(common.CacheRedisPassword, pass)
return true, nil
}
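The body of `utils.GeneratePassword(15)` is not shown here. Purely as an illustration, a 15-character alphanumeric password drawn from `crypto/rand` could look like this (our sketch, not the actual utils implementation):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// generatePassword returns a random alphanumeric string of length n,
// using crypto/rand for uniform, unbiased character selection.
func generatePassword(n int) (string, error) {
	const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	out := make([]byte, n)
	for i := range out {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(chars))))
		if err != nil {
			return "", err
		}
		out[i] = chars[idx.Int64()]
	}
	return string(out), nil
}

func main() {
	pass, err := generatePassword(15)
	fmt.Println(len(pass), err == nil) // prints 15 true
}
```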
type NotEqualDesiredVersion struct {
common.KubePrepare
}
func (n *NotEqualDesiredVersion) PreCheck(runtime connector.Runtime) (bool, error) {
ksVersion, ok := n.PipelineCache.GetMustString(common.KubeSphereVersion)
if !ok {
ksVersion = ""
}
if n.KubeConf.Cluster.KubeSphere.Version == ksVersion {
return false, nil
}
return true, nil
}


@@ -118,7 +118,6 @@ func (m *DeployPrometheusModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(CreateOperator),
Parallel: false,
@@ -130,7 +129,6 @@ func (m *DeployPrometheusModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: &CreatePrometheusComponent{
Component: "node-exporter",
@@ -145,7 +143,6 @@ func (m *DeployPrometheusModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: &CreatePrometheusComponent{
Component: "kube-state-metrics",
@@ -160,7 +157,6 @@ func (m *DeployPrometheusModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: &CreatePrometheusComponent{
Component: "prometheus",
@@ -173,7 +169,6 @@ func (m *DeployPrometheusModule) Init() {
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: &CreatePrometheusComponent{
Component: "kubernetes",
@@ -182,26 +177,12 @@ func (m *DeployPrometheusModule) Init() {
Parallel: false,
}
//createAlertManager := &task.RemoteTask{
// Name: "CreateAlertManager",
// Hosts: m.Runtime.GetHostsByRole(common.Master),
// Prepare: &prepare.PrepareCollection{
// new(common.OnlyFirstMaster),
// new(NotEqualDesiredVersion),
// },
// Action: &CreatePrometheusComponent{
// Component: "alertmanager",
// },
// Parallel: false,
//}
m.Tasks = []task.Interface{
createOperator,
createNodeExporter,
createKubeStateMetrics,
createPrometheus,
createKubeMonitor,
//createAlertManager,
}
}


@@ -1,161 +0,0 @@
package plugins
import (
"context"
"fmt"
"path"
"strings"
"time"
"github.com/beclab/Olares/cli/pkg/common"
cc "github.com/beclab/Olares/cli/pkg/core/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/core/prepare"
"github.com/beclab/Olares/cli/pkg/core/task"
"github.com/beclab/Olares/cli/pkg/core/util"
"github.com/beclab/Olares/cli/pkg/utils"
"github.com/pkg/errors"
ctrl "sigs.k8s.io/controller-runtime"
)
type CreateRedisSecret struct {
common.KubeAction
}
func (t *CreateRedisSecret) Execute(runtime connector.Runtime) error {
kubectlpath, err := util.GetCommand(common.CommandKubectl)
if err != nil {
return fmt.Errorf("kubectl not found")
}
redisPwd, ok := t.PipelineCache.Get(common.CacheRedisPassword)
if !ok {
return fmt.Errorf("get redis password from module cache failed")
}
if stdout, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s -n %s create secret generic redis-secret --from-literal=auth=%s", kubectlpath, common.NamespaceKubesphereSystem, redisPwd), false, true); err != nil {
if err != nil && !strings.Contains(stdout, "already exists") {
return errors.Wrap(errors.WithStack(err), "create redis secret failed")
}
}
return nil
}
type BackupRedisManifests struct {
common.KubeAction
}
func (t *BackupRedisManifests) Execute(runtime connector.Runtime) error {
kubectlpath, err := util.GetCommand(common.CommandKubectl)
if err != nil {
return fmt.Errorf("kubectl not found")
}
rver, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s get pod -n %s -l app=%s,tier=database,version=%s-4.0 | wc -l",
kubectlpath, common.NamespaceKubesphereSystem, common.ChartNameRedis, common.ChartNameRedis), false, false)
if err != nil || strings.Contains(rver, "No resources found") {
return nil
}
rver = strings.ReplaceAll(rver, "No resources found in kubesphere-system namespace.", "")
rver = strings.ReplaceAll(rver, "\r\n", "")
rver = strings.ReplaceAll(rver, "\n", "")
if rver != "0" {
var cmd = fmt.Sprintf("%s get svc -n %s %s -o yaml > %s/redis-svc-backup.yaml && %s delete svc -n %s %s",
kubectlpath,
common.NamespaceKubesphereSystem, common.ChartNameRedis,
common.KubeManifestDir, // TODO: fix path handling across platforms
kubectlpath,
common.NamespaceKubesphereSystem, common.ChartNameRedis)
if _, err := runtime.GetRunner().SudoCmd(cmd, false, true); err != nil {
logger.Errorf("failed to backup %s svc: %v", common.ChartNameRedis, err)
return errors.Wrap(errors.WithStack(err), "backup redis svc failed")
}
}
return nil
}
type DeployRedis struct {
common.KubeAction
}
func (t *DeployRedis) Execute(runtime connector.Runtime) error {
config, err := ctrl.GetConfig()
if err != nil {
return err
}
var appName = common.ChartNameRedis
var appPath = path.Join(runtime.GetInstallerDir(), cc.BuildFilesCacheDir, cc.BuildDir, appName)
actionConfig, settings, err := utils.InitConfig(config, common.NamespaceKubesphereSystem)
if err != nil {
return err
}
var ctx, cancel = context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
if err := utils.UpgradeCharts(ctx, actionConfig, settings, appName, appPath, "", common.NamespaceKubesphereSystem, nil, false); err != nil {
return err
}
return nil
}
// +++++
type DeployRedisModule struct {
common.KubeModule
}
func (m *DeployRedisModule) Init() {
m.Name = "DeployRedis"
createRedisSecret := &task.RemoteTask{
Name: "CreateRedisSecret",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
new(GenerateRedisPassword),
},
Action: new(CreateRedisSecret),
Parallel: false,
Retry: 0,
}
backupRedisManifests := &task.RemoteTask{
Name: "BackupRedisManifests",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(BackupRedisManifests),
Parallel: false,
Retry: 0,
}
deployRedis := &task.RemoteTask{
Name: "DeployRedis",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
new(CheckStorageClass),
},
Action: new(DeployRedis),
Parallel: false,
Retry: 0,
}
m.Tasks = []task.Interface{
createRedisSecret,
backupRedisManifests,
deployRedis,
}
}


@@ -1,16 +1,12 @@
package plugins
import (
"fmt"
"path"
"strings"
"github.com/beclab/Olares/cli/pkg/common"
cc "github.com/beclab/Olares/cli/pkg/core/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/utils"
"github.com/pkg/errors"
)
type CopyEmbedFiles struct {
@@ -21,97 +17,3 @@ func (t *CopyEmbedFiles) Execute(runtime connector.Runtime) error {
var dst = path.Join(runtime.GetInstallerDir(), cc.BuildFilesCacheDir)
return utils.CopyEmbed(assets, ".", dst)
}
type CheckNodeState struct {
common.KubeAction
}
func (t *CheckNodeState) Execute(runtime connector.Runtime) error {
var kubectlpath, _ = t.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
var cmd = fmt.Sprintf("%s get node --no-headers", kubectlpath)
stdout, err := runtime.GetRunner().SudoCmd(cmd, false, false)
if err != nil || stdout == "" {
return fmt.Errorf("Node Pending")
}
var nodeInfo = strings.Fields(stdout)
if len(nodeInfo) != 5 {
logger.Errorf("node info invalid: %s", stdout)
return fmt.Errorf("Node Pending")
}
var state = nodeInfo[1]
var version = nodeInfo[4]
if state != "Ready" {
return fmt.Errorf("Node Pending")
}
t.PipelineCache.Set(common.CacheKubeletVersion, version)
return nil
}
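The parsing in `CheckNodeState` assumes the `--no-headers` output of `kubectl get node` for a single node, i.e. exactly five whitespace-separated columns (name, status, roles, age, version). A minimal, self-contained sketch of that parse (`parseNode` is a hypothetical helper, and the sample line is illustrative, not taken from a real cluster):

```go
package main

import (
	"fmt"
	"strings"
)

// parseNode mirrors the field handling in CheckNodeState: it assumes the
// no-headers output of `kubectl get node` for a single node, i.e. exactly
// five whitespace-separated columns (name, status, roles, age, version).
func parseNode(stdout string) (state, version string, err error) {
	fields := strings.Fields(stdout)
	if len(fields) != 5 {
		return "", "", fmt.Errorf("node info invalid: %q", stdout)
	}
	return fields[1], fields[4], nil
}

func main() {
	state, version, err := parseNode("olares   Ready   control-plane   12d   v1.29.2")
	if err != nil {
		panic(err)
	}
	fmt.Println(state, version) // prints: Ready v1.29.2
}
```

Note that this is also why the check reports "Node Pending" on multi-node output: `strings.Fields` over several lines yields more than five fields.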
type InitNamespace struct {
common.KubeAction
}
func (t *InitNamespace) Execute(runtime connector.Runtime) error {
var kubectlpath, _ = t.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
for _, ns := range []string{common.NamespaceKubesphereControlsSystem} {
if stdout, err := runtime.GetRunner().Cmd(fmt.Sprintf("%s create ns %s", kubectlpath, ns), false, true); err != nil {
if !strings.Contains(stdout, "already exists") {
logger.Errorf("create ns %s failed: %v", ns, err)
return errors.Wrapf(errors.WithStack(err), "create namespace %s failed", ns)
}
}
}
// _, err := runtime.GetRunner().SudoCmd(
// fmt.Sprintf(`cat <<EOF | /usr/local/bin/kubectl apply -f -
// apiVersion: v1
// kind: Namespace
// metadata:
// name: %s
// ---
// apiVersion: v1
// kind: Namespace
// metadata:
// name: %s
// EOF
// `, common.NamespaceKubesphereControlsSystem, common.NamespaceKubesphereMonitoringFederated), false, true)
// if err != nil {
// return errors.Wrap(errors.WithStack(err), fmt.Sprintf("create namespace: %s and %s",
// common.NamespaceKubesphereControlsSystem, common.NamespaceKubesphereMonitoringFederated))
// }
var allNs = []string{
common.NamespaceDefault,
common.NamespaceKubeNodeLease,
common.NamespaceKubePublic,
common.NamespaceKubeSystem,
common.NamespaceKubekeySystem,
common.NamespaceKubesphereControlsSystem,
common.NamespaceKubesphereSystem,
}
for _, ns := range allNs {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s label ns %s kubesphere.io/workspace=system-workspace --overwrite", kubectlpath, ns), false, true); err != nil {
logger.Errorf("label ns %s kubesphere.io/workspace=system-workspace failed: %v", ns, err)
return errors.Wrapf(errors.WithStack(err), "label namespace %s kubesphere.io/workspace=system-workspace failed", ns)
}
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s label ns %s kubesphere.io/namespace=%s --overwrite", kubectlpath, ns, ns), false, true); err != nil {
logger.Errorf("label ns %s kubesphere.io/namespace=%s failed: %v", ns, ns, err)
return errors.Wrapf(errors.WithStack(err), "label namespace %s kubesphere.io/namespace=%s failed", ns, ns)
}
}
return nil
}


@@ -1,79 +0,0 @@
package plugins
import (
"fmt"
"path"
"strings"
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/core/prepare"
"github.com/beclab/Olares/cli/pkg/core/task"
"github.com/beclab/Olares/cli/pkg/core/util"
"github.com/beclab/Olares/cli/pkg/utils"
"github.com/pkg/errors"
)
type GenerateKubeSphereToken struct {
common.KubeAction
}
func (t *GenerateKubeSphereToken) Execute(runtime connector.Runtime) error {
var kubectlpath, _ = t.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
var random, err = utils.GeneratePassword(32)
if err != nil {
logger.Errorf("failed to generate password: %v", err)
return err
}
token, err := util.EncryptToken(random)
if err != nil {
return errors.Wrap(errors.WithStack(err), "create kubesphere token failed")
}
var cmd = fmt.Sprintf("%s get secrets -n %s --no-headers", kubectlpath, common.NamespaceKubesphereSystem)
stdout, _ := runtime.GetRunner().SudoCmd(cmd, false, false)
if strings.Contains(stdout, "kubesphere-secret") {
cmd = fmt.Sprintf("%s delete secrets -n %s kubesphere-secret", kubectlpath, common.NamespaceKubesphereSystem)
runtime.GetRunner().SudoCmd(cmd, false, true)
}
cmd = fmt.Sprintf("%s create secret generic kubesphere-secret --from-literal=token=%s --from-literal=secret=%s -n %s", kubectlpath,
token, random, common.NamespaceKubesphereSystem)
if _, err := runtime.GetRunner().SudoCmd(cmd, false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "create kubesphere token failed")
}
t.PipelineCache.Set(common.CacheJwtSecret, random)
return nil
}
// +++++
type CreateKubeSphereSecretModule struct {
common.KubeModule
}
func (m *CreateKubeSphereSecretModule) Init() {
m.Name = "CreateKubeSphereSecret"
generateKubeSphereToken := &task.RemoteTask{
Name: "GenerateKubeSphereToken",
Hosts: m.Runtime.GetHostsByRole(common.Master),
Prepare: &prepare.PrepareCollection{
new(common.OnlyFirstMaster),
new(NotEqualDesiredVersion),
},
Action: new(GenerateKubeSphereToken),
Parallel: false,
Retry: 0,
}
m.Tasks = []task.Interface{generateKubeSphereToken}
}


@@ -1,57 +0,0 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubesphere
import (
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/pkg/errors"
versionutil "k8s.io/apimachinery/pkg/util/version"
)
type VersionBelowV3 struct {
common.KubePrepare
}
func (v *VersionBelowV3) PreCheck(runtime connector.Runtime) (bool, error) {
versionStr, ok := v.PipelineCache.GetMustString(common.KubeSphereVersion)
if !ok {
return false, errors.New("get current kubesphere version failed by pipeline cache")
}
version := versionutil.MustParseSemantic(versionStr)
v300 := versionutil.MustParseSemantic("v3.0.0")
if v.KubeConf.Cluster.KubeSphere.Enabled && v.KubeConf.Cluster.KubeSphere.Version == "v3.0.0" && version.LessThan(v300) {
return true, nil
}
return false, nil
}
type NotEqualDesiredVersion struct {
common.KubePrepare
}
func (n *NotEqualDesiredVersion) PreCheck(runtime connector.Runtime) (bool, error) {
ksVersion, ok := n.PipelineCache.GetMustString(common.KubeSphereVersion)
if !ok {
ksVersion = ""
}
if n.KubeConf.Cluster.KubeSphere.Version == ksVersion {
return false, nil
}
return true, nil
}


@@ -18,18 +18,12 @@ package kubesphere
import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
kubekeyapiv1alpha2 "github.com/beclab/Olares/cli/apis/kubekey/v1alpha2"
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/core/util"
"github.com/beclab/Olares/cli/pkg/version/kubesphere"
"github.com/beclab/Olares/cli/pkg/version/kubesphere/templates"
"github.com/pkg/errors"
)
@@ -52,27 +46,6 @@ func (d *DeleteKubeSphereCaches) Execute(runtime connector.Runtime) error {
return nil
}
type AddInstallerConfig struct {
common.KubeAction
}
func (a *AddInstallerConfig) Execute(runtime connector.Runtime) error {
//var ksFilename string
// if runtime.GetSystemInfo().IsDarwin() {
// ksFilename = path.Join(common.TmpDir, "/etc/kubernetes/addons/kubesphere.yaml")
// } else {
//ksFilename = "/etc/kubernetes/addons/kubesphere.yaml"
// }
//configurationBase64 := base64.StdEncoding.EncodeToString([]byte(a.KubeConf.Cluster.KubeSphere.Configurations))
//if _, err := runtime.GetRunner().SudoCmd(
// fmt.Sprintf("echo %s | base64 -d >> %s", configurationBase64, ksFilename),
// false, false); err != nil {
// return errors.Wrap(errors.WithStack(err), "add config to ks-installer manifests failed")
//}
return nil
}
type CreateNamespace struct {
common.KubeAction
}
@@ -91,6 +64,11 @@ metadata:
---
apiVersion: v1
kind: Namespace
metadata:
name: kubesphere-controls-system
---
apiVersion: v1
kind: Namespace
metadata:
name: kubesphere-monitoring-system
EOF`, kubectl)
@@ -101,235 +79,6 @@ EOF`, kubectl)
return nil
}
type Setup struct {
common.KubeAction
}
func (s *Setup) Execute(runtime connector.Runtime) error {
nodeIp, _ := s.PipelineCache.GetMustString(common.CacheMinikubeNodeIp)
filePath := filepath.Join(common.KubeAddonsDir, templates.KsInstaller.Name())
var minikubepath, ok = s.PipelineCache.GetMustString(common.CacheCommandMinikubePath)
if !ok || minikubepath == "" {
minikubepath = path.Join(common.BinDir, common.CommandMinikube)
}
kubectlpath, ok := s.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if !ok || kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
var addrList []string
var tlsDisable bool
var port string
switch s.KubeConf.Cluster.Etcd.Type {
case kubekeyapiv1alpha2.KubeKey:
for _, host := range runtime.GetHostsByRole(common.ETCD) {
addrList = append(addrList, host.GetInternalAddress())
}
caFile := "/etc/ssl/etcd/ssl/ca.pem"
certFile := fmt.Sprintf("/etc/ssl/etcd/ssl/node-%s.pem", runtime.RemoteHost().GetName())
keyFile := fmt.Sprintf("/etc/ssl/etcd/ssl/node-%s-key.pem", runtime.RemoteHost().GetName())
if output, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("/usr/local/bin/kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs "+
"--from-file=etcd-client-ca.crt=%s "+
"--from-file=etcd-client.crt=%s "+
"--from-file=etcd-client.key=%s", caFile, certFile, keyFile), false, false); err != nil {
if !strings.Contains(output, "exists") {
return err
}
}
case kubekeyapiv1alpha2.MiniKube:
var etcdPath = common.KubeEtcdCertDir // path.Join(common.TmpDir, common.KubeEtcdCertDir)
if !util.IsExist(etcdPath) {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("mkdir -p %s", etcdPath), false, false); err != nil {
return err
}
}
var certfiles = []string{
"ca.crt",
"server.crt",
"server.key",
}
for _, certfile := range certfiles {
var cfile = path.Join(common.MinikubeEtcdCertDir, certfile)
var cmd = fmt.Sprintf("%s -p %s ssh sudo chmod 644 %s && minikube -p %s cp %s:%s %s", minikubepath,
runtime.RemoteHost().GetMinikubeProfile(), cfile,
runtime.RemoteHost().GetMinikubeProfile(), runtime.RemoteHost().GetMinikubeProfile(),
cfile, path.Join(etcdPath, certfile))
if _, err := runtime.GetRunner().SudoCmd(cmd, false, false); err != nil {
return err
}
}
caFile := path.Join(etcdPath, "ca.crt")
certFile := path.Join(etcdPath, "server.crt")
keyFile := path.Join(etcdPath, "server.key")
addrList = append(addrList, nodeIp)
if output, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs "+
"--from-file=%s "+
"--from-file=%s "+
"--from-file=%s", kubectlpath, caFile, certFile, keyFile), false, false); err != nil {
if !strings.Contains(output, "already exists") {
return err
}
}
//path.Join(common.TmpDir, filepath.Join(common.KubeAddonsDir, templates.KsInstaller.Name()))
filePath = filepath.Join(common.KubeAddonsDir, templates.KsInstaller.Name())
case kubekeyapiv1alpha2.Kubeadm:
for _, host := range runtime.GetHostsByRole(common.Master) {
addrList = append(addrList, host.GetInternalAddress())
}
caFile := "/etc/kubernetes/pki/etcd/ca.crt"
certFile := "/etc/kubernetes/pki/etcd/healthcheck-client.crt"
keyFile := "/etc/kubernetes/pki/etcd/healthcheck-client.key"
if output, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("/usr/local/bin/kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs "+
"--from-file=etcd-client-ca.crt=%s "+
"--from-file=etcd-client.crt=%s "+
"--from-file=etcd-client.key=%s", caFile, certFile, keyFile), false, false); err != nil {
if !strings.Contains(output, "exists") {
return err
}
}
case kubekeyapiv1alpha2.External:
for _, endpoint := range s.KubeConf.Cluster.Etcd.External.Endpoints {
e := strings.Split(strings.TrimSpace(endpoint), "://")
s := strings.Split(e[1], ":")
port = s[1]
addrList = append(addrList, s[0])
if e[0] == "http" {
tlsDisable = true
}
}
if tlsDisable {
if output, err := runtime.GetRunner().SudoCmd("/usr/local/bin/kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs", true, false); err != nil {
if !strings.Contains(output, "exists") {
return err
}
}
} else {
caFile := fmt.Sprintf("/etc/ssl/etcd/ssl/%s", filepath.Base(s.KubeConf.Cluster.Etcd.External.CAFile))
certFile := fmt.Sprintf("/etc/ssl/etcd/ssl/%s", filepath.Base(s.KubeConf.Cluster.Etcd.External.CertFile))
keyFile := fmt.Sprintf("/etc/ssl/etcd/ssl/%s", filepath.Base(s.KubeConf.Cluster.Etcd.External.KeyFile))
if output, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("/usr/local/bin/kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs "+
"--from-file=etcd-client-ca.crt=%s "+
"--from-file=etcd-client.crt=%s "+
"--from-file=etcd-client.key=%s", caFile, certFile, keyFile), true, false); err != nil {
if !strings.Contains(output, "exists") {
return err
}
}
}
}
var sedCommand = runtime.GetCommandSed()
etcdEndPoint := strings.Join(addrList, ",")
var cmdEndpoint = fmt.Sprintf("%s '/endpointIps/s/\\:.*/\\: %s/g' %s", sedCommand, etcdEndPoint, filePath)
if _, err := runtime.GetRunner().SudoCmd(cmdEndpoint, false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "update etcd endpoint failed")
}
if tlsDisable {
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/tlsEnable/s/\\:.*/\\: false/g' %s", sedCommand, filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "update etcd tls failed")
}
}
if len(port) != 0 {
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s 's/2379/%s/g' %s", sedCommand, port, filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "update etcd port failed")
}
}
if s.KubeConf.Cluster.Registry.PrivateRegistry != "" {
PrivateRegistry := strings.Replace(s.KubeConf.Cluster.Registry.PrivateRegistry, "/", "\\/", -1)
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/local_registry/s/\\:.*/\\: %s/g' %s", sedCommand, PrivateRegistry, filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), fmt.Sprintf("add private registry: %s failed", s.KubeConf.Cluster.Registry.PrivateRegistry))
}
} else {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s '/local_registry/d' %s", sedCommand, filePath), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), fmt.Sprintf("remove private registry failed"))
}
}
if s.KubeConf.Cluster.Registry.NamespaceOverride != "" {
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/namespace_override/s/\\:.*/\\: %s/g' %s", sedCommand, s.KubeConf.Cluster.Registry.NamespaceOverride, filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), fmt.Sprintf("add namespace override: %s failed", s.KubeConf.Cluster.Registry.NamespaceOverride))
}
} else {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("%s '/namespace_override/d' %s", sedCommand, filePath), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove namespace override failed")
}
}
_, ok = kubesphere.CNSource[s.KubeConf.Cluster.KubeSphere.Version]
if ok && (os.Getenv("KKZONE") == "cn" || s.KubeConf.Cluster.Registry.PrivateRegistry == "registry.cn-beijing.aliyuncs.com") {
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/zone/s/\\:.*/\\: %s/g' %s", sedCommand, "cn", filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), fmt.Sprintf("add kubekey zone: %s failed", s.KubeConf.Cluster.Registry.PrivateRegistry))
}
} else {
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/zone/d' %s", sedCommand, filePath),
false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove kubekey zone failed")
}
}
switch s.KubeConf.Cluster.Kubernetes.ContainerManager {
case "docker", "containerd", "crio":
if _, err := runtime.GetRunner().SudoCmd(
fmt.Sprintf("%s '/containerruntime/s/\\:.*/\\: %s/g' %s", sedCommand, s.KubeConf.Cluster.Kubernetes.ContainerManager, filePath), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), fmt.Sprintf("set container runtime: %s failed", s.KubeConf.Cluster.Kubernetes.ContainerManager))
}
default:
logger.Infof("%s Currently, the logging module of KubeSphere does not support %s. If %s is used, the logging module will be unavailable.",
runtime.RemoteHost().GetName(), s.KubeConf.Cluster.Kubernetes.ContainerManager, s.KubeConf.Cluster.Kubernetes.ContainerManager)
}
return nil
}
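The `sed` invocations in `Setup` rewrite the value of top-level keys such as `endpointIps:` in the ks-installer manifest in place. The same substitution can be sketched in pure Go (`setYamlValue` is a hypothetical helper, shown only to illustrate what the `sed` expression `'/endpointIps/s/\:.*/\: <value>/g'` does):

```go
package main

import (
	"fmt"
	"regexp"
)

// setYamlValue replaces the value of every line whose key matches, mirroring
// the sed expression '/key/s/:.*/: <value>/' used in Setup, but with the
// stdlib regexp package instead of shelling out to sed.
func setYamlValue(manifest, key, value string) string {
	re := regexp.MustCompile(`(?m)^(\s*` + regexp.QuoteMeta(key) + `)\s*:.*$`)
	return re.ReplaceAllString(manifest, "${1}: "+value)
}

func main() {
	manifest := "etcd:\n  monitoring: true\n  endpointIps: localhost\n  port: 2379\n"
	fmt.Print(setYamlValue(manifest, "endpointIps", "192.168.0.7,192.168.0.8"))
}
```

Like the original `sed` command, this is a line-oriented text edit, not a YAML-aware one, so it relies on the manifest keeping the key on a single line.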
type Apply struct {
common.KubeAction
}
func (a *Apply) Execute(runtime connector.Runtime) error {
var kubectlpath, ok = a.PipelineCache.GetMustString(common.CacheCommandKubectlPath)
if !ok || kubectlpath == "" {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
filePath := filepath.Join(common.KubeAddonsDir, templates.KsInstaller.Name())
// if runtime.GetSystemInfo().IsDarwin() {
// filePath = path.Join(common.TmpDir, filePath)
// }
deployKubesphereCmd := fmt.Sprintf("%s apply -f %s --force", kubectlpath, filePath)
if _, err := runtime.GetRunner().Cmd(deployKubesphereCmd, false, true); err != nil {
return errors.Wrapf(errors.WithStack(err), "deploy %s failed", filePath)
}
return nil
}
type GetKubeCommand struct {
common.KubeAction
}


@@ -1,109 +0,0 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
type V2 struct {
Persistence Persistence `yaml:"persistence"`
Common Common `yaml:"common"`
Etcd Etcd `yaml:"etcd"`
MetricsServerOld MetricsServerOld `yaml:"metrics-server"`
MetricsServerNew MetricsServerNew `yaml:"metrics_server"`
Console Console `yaml:"console"`
Monitoring Monitoring `yaml:"monitoring"`
Logging Logging `yaml:"logging"`
Openpitrix Openpitrix `yaml:"openpitrix"`
Devops Devops `yaml:"devops"`
Servicemesh Servicemesh `yaml:"servicemesh"`
Notification Notification `yaml:"notification"`
Alerting Alerting `yaml:"alerting"`
LocalRegistry string `yaml:"local_registry"`
}
type Persistence struct {
StorageClass string `yaml:"storageClass"`
}
type Etcd struct {
Monitoring bool `yaml:"monitoring"`
EndpointIps string `yaml:"endpointIps"`
Port int `yaml:"port"`
TlsEnable bool `yaml:"tlsEnable"`
}
type Common struct {
MysqlVolumeSize string `yaml:"mysqlVolumeSize"`
MinioVolumeSize string `yaml:"minioVolumeSize"`
EtcdVolumeSize string `yaml:"etcdVolumeSize"`
OpenldapVolumeSize string `yaml:"openldapVolumeSize"`
RedisVolumSize string `yaml:"redisVolumSize"`
}
type MetricsServerOld struct {
Enabled string `yaml:"enabled"`
}
type MetricsServerNew struct {
Enabled string `yaml:"enabled"`
}
type Console struct {
EnableMultiLogin bool `yaml:"enableMultiLogin"`
Port int `yaml:"port"`
}
type Monitoring struct {
PrometheusReplicas int `yaml:"prometheusReplicas"`
PrometheusMemoryRequest string `yaml:"prometheusMemoryRequest"`
PrometheusVolumeSize string `yaml:"prometheusVolumeSize"`
}
type Logging struct {
Enabled bool `yaml:"enabled"`
ElasticsearchMasterReplicas int `yaml:"elasticsearchMasterReplicas"`
ElasticsearchDataReplicas int `yaml:"elasticsearchDataReplicas"`
LogsidecarReplicas int `yaml:"logsidecarReplicas"`
ElasticsearchVolumeSize string `yaml:"elasticsearchVolumeSize"`
ElasticsearchMasterVolumeSize string `yaml:"elasticsearchMasterVolumeSize"`
ElasticsearchDataVolumeSize string `yaml:"elasticsearchDataVolumeSize"`
LogMaxAge int `yaml:"logMaxAge"`
ElkPrefix string `yaml:"elkPrefix"`
}
type Openpitrix struct {
Enabled bool `yaml:"enabled"`
}
type Devops struct {
Enabled bool `yaml:"enabled"`
JenkinsMemoryLim string `yaml:"jenkinsMemoryLim"`
JenkinsMemoryReq string `yaml:"jenkinsMemoryReq"`
JenkinsVolumeSize string `yaml:"jenkinsVolumeSize"`
JenkinsjavaoptsXms string `yaml:"jenkinsJavaOpts_Xms"`
JenkinsjavaoptsXmx string `yaml:"jenkinsJavaOpts_Xmx"`
JenkinsjavaoptsMaxram string `yaml:"jenkinsJavaOpts_MaxRAM"`
}
type Servicemesh struct {
Enabled bool `yaml:"enabled"`
}
type Notification struct {
Enabled bool `yaml:"enabled"`
}
type Alerting struct {
Enabled bool `yaml:"enabled"`
}


@@ -1,160 +0,0 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v3
type ClusterConfig struct {
ApiVersion string `yaml:"apiVersion"`
Kind string `yaml:"kind"`
Metadata Metadata `yaml:"metadata"`
Spec *V3 `yaml:"spec"`
}
type Metadata struct {
Name string `yaml:"name"`
Namespace string `yaml:"namespace"`
Label Label `yaml:"labels"`
}
type Label struct {
Version string `yaml:"version"`
}
type V3 struct {
Persistence Persistence `yaml:"persistence"`
Authentication Authentication `yaml:"authentication"`
Common Common `yaml:"common"`
Etcd Etcd `yaml:"etcd"`
MetricsServer MetricsServer `yaml:"metrics_server"`
Console Console `yaml:"console"`
Monitoring Monitoring `yaml:"monitoring"`
Logging Logging `yaml:"logging"`
Openpitrix Openpitrix `yaml:"openpitrix"`
Devops Devops `yaml:"devops"`
Servicemesh Servicemesh `yaml:"servicemesh"`
Notification Notification `yaml:"notification"`
Alerting Alerting `yaml:"alerting"`
Auditing Auditing `yaml:"auditing"`
Events Events `yaml:"events"`
Multicluster Multicluster `yaml:"multicluster"`
Networkpolicy Networkpolicy `yaml:"networkpolicy"`
LocalRegistry string `yaml:"local_registry"`
}
type Persistence struct {
StorageClass string `yaml:"storageClass"`
}
type MetricsServer struct {
Enabled bool `yaml:"enabled"`
}
type Authentication struct {
JwtSecret string `yaml:"jwtSecret"`
}
type Etcd struct {
Monitoring bool `yaml:"monitoring"`
EndpointIps string `yaml:"endpointIps"`
Port int `yaml:"port"`
TlsEnable bool `yaml:"tlsEnable"`
}
type Common struct {
MysqlVolumeSize string `yaml:"mysqlVolumeSize"`
MinioVolumeSize string `yaml:"minioVolumeSize"`
EtcdVolumeSize string `yaml:"etcdVolumeSize"`
OpenldapVolumeSize string `yaml:"openldapVolumeSize"`
RedisVolumSize string `yaml:"redisVolumSize"`
ES ES `yaml:"es"`
}
type ES struct {
//ElasticsearchMasterReplicas int `yaml:"elasticsearchMasterReplicas"`
//ElasticsearchDataReplicas int `yaml:"elasticsearchDataReplicas"`
ElasticsearchMasterVolumeSize string `yaml:"elasticsearchMasterVolumeSize"`
ElasticsearchDataVolumeSize string `yaml:"elasticsearchDataVolumeSize"`
LogMaxAge int `yaml:"logMaxAge"`
ElkPrefix string `yaml:"elkPrefix"`
}
type Console struct {
EnableMultiLogin bool `yaml:"enableMultiLogin"`
Port int `yaml:"port"`
}
type Alerting struct {
Enabled bool `yaml:"enabled"`
}
type Auditing struct {
Enabled bool `yaml:"enabled"`
}
type Devops struct {
Enabled bool `yaml:"enabled"`
JenkinsMemoryLim string `yaml:"jenkinsMemoryLim"`
JenkinsMemoryReq string `yaml:"jenkinsMemoryReq"`
JenkinsVolumeSize string `yaml:"jenkinsVolumeSize"`
JenkinsjavaoptsXms string `yaml:"jenkinsJavaOpts_Xms"`
JenkinsjavaoptsXmx string `yaml:"jenkinsJavaOpts_Xmx"`
JenkinsjavaoptsMaxram string `yaml:"jenkinsJavaOpts_MaxRAM"`
}
type Events struct {
Enabled bool `yaml:"enabled"`
Ruler Ruler `yaml:"ruler"`
}
type Ruler struct {
Enabled bool `yaml:"enabled"`
Replicas int `yaml:"replicas"`
}
type Logging struct {
Enabled bool `yaml:"enabled"`
LogsidecarReplicas int `yaml:"logsidecarReplicas"`
}
type Metrics struct {
Enabled bool `yaml:"enabled"`
}
type Monitoring struct {
//AlertmanagerReplicas int `yaml:"alertmanagerReplicas"`
//PrometheusReplicas int `yaml:"prometheusReplicas"`
PrometheusMemoryRequest string `yaml:"prometheusMemoryRequest"`
PrometheusVolumeSize string `yaml:"prometheusVolumeSize"`
}
type Multicluster struct {
ClusterRole string `yaml:"clusterRole"`
}
type Networkpolicy struct {
Enabled bool `yaml:"enabled"`
}
type Notification struct {
Enabled bool `yaml:"enabled"`
}
type Openpitrix struct {
Enabled bool `yaml:"enabled"`
}
type Servicemesh struct {
Enabled bool `yaml:"enabled"`
}


@@ -37,12 +37,7 @@ func NewDarwinClusterPhase(runtime *common.KubeRuntime, manifestMap manifest.Ins
},
&kubesphere.DeployMiniKubeModule{},
&kubesphere.DeployModule{Skip: !runtime.Cluster.KubeSphere.Enabled},
&ksplugins.DeployKsPluginsModule{},
//&ksplugins.DeployRedisModule{},
&ksplugins.CreateKubeSphereSecretModule{},
&ksplugins.DeployKsCoreConfigModule{}, // ks-core-config
&ksplugins.CreateMonitorDashboardModule{},
//&ksplugins.CreateNotificationModule{},
&ksplugins.DeployPrometheusModule{},
&ksplugins.DeployKsCoreModule{},
&kubesphere.CheckResultModule{Skip: !runtime.Cluster.KubeSphere.Enabled},
@@ -94,13 +89,8 @@ func NewK3sCreateClusterPhase(runtime *common.KubeRuntime, manifestMap manifest.
&certs.AutoRenewCertsModule{Skip: !runtime.Cluster.Kubernetes.EnableAutoRenewCerts()},
&k3s.SaveKubeConfigModule{},
&storage.DeployLocalVolumeModule{Skip: skipLocalStorage},
&kubesphere.DeployModule{Skip: !runtime.Cluster.KubeSphere.Enabled}, //
&ksplugins.DeployKsPluginsModule{},
//&ksplugins.DeployRedisModule{},
&ksplugins.CreateKubeSphereSecretModule{},
&ksplugins.DeployKsCoreConfigModule{}, // ks-core-config
&ksplugins.CreateMonitorDashboardModule{},
//&ksplugins.CreateNotificationModule{},
&kubesphere.DeployModule{Skip: !runtime.Cluster.KubeSphere.Enabled},
&ksplugins.DeployKsCoreConfigModule{},
&ksplugins.DeployPrometheusModule{},
&ksplugins.DeployKsCoreModule{},
&kubesphere.CheckResultModule{Skip: !runtime.Cluster.KubeSphere.Enabled},
@@ -157,12 +147,7 @@ func NewCreateClusterPhase(runtime *common.KubeRuntime, manifestMap manifest.Ins
&kubernetes.SaveKubeConfigModule{},
&storage.DeployLocalVolumeModule{Skip: skipLocalStorage},
&kubesphere.DeployModule{Skip: !runtime.Cluster.KubeSphere.Enabled},
&ksplugins.DeployKsPluginsModule{},
//&ksplugins.DeployRedisModule{},
&ksplugins.CreateKubeSphereSecretModule{},
&ksplugins.DeployKsCoreConfigModule{}, // ! ks-core-config
&ksplugins.CreateMonitorDashboardModule{},
//&ksplugins.CreateNotificationModule{},
&ksplugins.DeployPrometheusModule{},
&ksplugins.DeployKsCoreModule{},
&kubesphere.CheckResultModule{Skip: !runtime.Cluster.KubeSphere.Enabled}, // check ks-apiserver phase


@@ -25,7 +25,6 @@ func CliInstallTerminusPipeline(opts *options.CliTerminusInstallOptions) error {
arg.SetOlaresVersion(opts.Version)
arg.SetMinikubeProfile(opts.MiniKubeProfile)
arg.SetStorage(getStorageValueFromEnv())
arg.SetTokenMaxAge()
arg.SetSwapConfig(opts.SwapConfig)
if err := arg.SwapConfig.Validate(); err != nil {
return err


@@ -35,7 +35,6 @@ func PrepareSystemPipeline(opts *options.CliPrepareSystemOptions, components []s
arg.SetOlaresVersion(opts.Version)
arg.SetRegistryMirrors(opts.RegistryMirrors)
arg.SetStorage(getStorageValueFromEnv())
arg.SetTokenMaxAge()
runtime, err := common.NewKubeRuntime(common.AllInOne, *arg)
if err != nil {


@@ -4,17 +4,18 @@ import (
"bufio"
"crypto/md5"
"fmt"
"github.com/Masterminds/semver/v3"
dockerref "github.com/containerd/containerd/reference/docker"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"regexp"
"sigs.k8s.io/kustomize/kyaml/yaml"
"sort"
"strings"
"github.com/Masterminds/semver/v3"
dockerref "github.com/containerd/containerd/reference/docker"
"sigs.k8s.io/kustomize/kyaml/yaml"
)
type OlaresManifest struct {
@@ -232,6 +233,10 @@ func (m *Manager) scan() error {
image := strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "image:"))
image = strings.Trim(image, "'")
image = strings.Trim(image, "\"")
// filter out dummy placeholder image names
if strings.EqualFold(strings.TrimSpace(image), "nonexisting") {
continue
}
images = append(images, image)
}
}
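The placeholder filter added in this hunk can be sketched as a standalone helper. A minimal sketch only; `extractImages` is an illustrative name, not the actual `Manager.scan` method:

```go
package main

import (
	"fmt"
	"strings"
)

// extractImages mimics the scan logic above: it pulls the value of
// "image:" lines, strips surrounding quotes, and skips the dummy
// "nonexisting" placeholder (case-insensitively).
func extractImages(lines []string) []string {
	var images []string
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "image:") {
			continue
		}
		image := strings.TrimSpace(strings.TrimPrefix(line, "image:"))
		image = strings.Trim(image, "'")
		image = strings.Trim(image, "\"")
		// filter out dummy placeholder image names
		if strings.EqualFold(image, "nonexisting") {
			continue
		}
		images = append(images, image)
	}
	return images
}

func main() {
	lines := []string{
		`  image: "nginx:1.25"`,
		`  image: 'nonexisting'`,
		`  image: redis:7`,
	}
	fmt.Println(extractImages(lines)) // [nginx:1.25 redis:7]
}
```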

View File

@@ -16,7 +16,6 @@ import (
"github.com/beclab/Olares/cli/pkg/core/logger"
"github.com/beclab/Olares/cli/pkg/storage"
"github.com/beclab/Olares/cli/pkg/clientset"
"github.com/beclab/Olares/cli/pkg/common"
cc "github.com/beclab/Olares/cli/pkg/core/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
@@ -235,50 +234,6 @@ func (p *Patch) Execute(runtime connector.Runtime) error {
return errors.Wrap(errors.WithStack(err), "patch globalrole workspace manager failed")
}
//var notificationManager = path.Join(runtime.GetInstallerDir(), "deploy", "patch-notification-manager.yaml")
//if _, err = runtime.GetRunner().SudoCmd(fmt.Sprintf("%s apply -f %s", kubectl, notificationManager), false, true); err != nil {
// return errors.Wrap(errors.WithStack(err), "patch notification manager failed")
//}
//var notificationManager = path.Join(runtime.GetInstallerDir(), "deploy", "patch-notification-manager.yaml")
//if _, err = runtime.GetRunner().Host.SudoCmd(fmt.Sprintf("%s apply -f %s", kubectl, notificationManager), false, true); err != nil {
// return errors.Wrap(errors.WithStack(err), "patch notification manager failed")
//}
//
//patchAdminContent := `{"metadata":{"finalizers":["finalizers.kubesphere.io/users"]}}`
//patchAdminCMD := fmt.Sprintf(
// "%s patch user admin -p '%s' --type='merge' ",
// kubectl,
// patchAdminContent)
//_, err = runtime.GetRunner().SudoCmd(patchAdminCMD, false, true)
//if err != nil {
// return errors.Wrap(errors.WithStack(err), "patch user admin failed")
//}
//patchAdminContent := "{\\\"metadata\\\":{\\\"finalizers\\\":[\\\"finalizers.kubesphere.io/users\\\"]}}"
//patchAdminCMD := fmt.Sprintf(
// "%s patch user admin -p '%s' --type='merge' ",
// kubectl,
// patchAdminContent)
//_, err = runtime.GetRunner().Host.SudoCmd(patchAdminCMD, false, true)
//if err != nil {
// return errors.Wrap(errors.WithStack(err), "patch user admin failed")
//}
//deleteAdminCMD := fmt.Sprintf("%s delete user admin --ignore-not-found", kubectl)
//_, err = runtime.GetRunner().SudoCmd(deleteAdminCMD, false, true)
//if err != nil {
// return errors.Wrap(errors.WithStack(err), "failed to delete ks admin user")
//}
deleteKubectlAdminCMD := fmt.Sprintf("%s -n kubesphere-controls-system delete deploy kubectl-admin --ignore-not-found", kubectl)
_, err = runtime.GetRunner().SudoCmd(deleteKubectlAdminCMD, false, true)
if err != nil {
return errors.Wrap(errors.WithStack(err), "failed to delete ks kubectl admin deployment")
}
deleteHTTPBackendCMD := fmt.Sprintf("%s -n kubesphere-controls-system delete deploy default-http-backend --ignore-not-found", kubectl)
_, err = runtime.GetRunner().SudoCmd(deleteHTTPBackendCMD, false, true)
if err != nil {
return errors.Wrap(errors.WithStack(err), "failed to delete ks default http backend")
}
patchFelixConfigContent := `{"spec":{"featureDetectOverride": "SNATFullyRandom=false,MASQFullyRandom=false"}}`
patchFelixConfigCMD := fmt.Sprintf(
"%s patch felixconfiguration default -p '%s' --type='merge'",
@@ -466,19 +421,6 @@ func cloudValue(cloudInstance bool) string {
return ""
}
func getRedisPassword(client clientset.Client, runtime connector.Runtime) (string, error) {
secret, err := client.Kubernetes().CoreV1().Secrets(common.NamespaceKubesphereSystem).Get(context.Background(), "redis-secret", metav1.GetOptions{})
if err != nil {
return "", errors.Wrap(errors.WithStack(err), "get redis secret failed")
}
if secret == nil || secret.Data == nil || secret.Data["auth"] == nil {
return "", fmt.Errorf("redis secret not found")
}
return string(secret.Data["auth"]), nil
}
type UserEnvConfig struct {
APIVersion string `yaml:"apiVersion"`
UserEnvs []v1alpha1.EnvVarSpec `yaml:"userEnvs"`

View File

@@ -1,22 +0,0 @@
package terminus
import (
"github.com/beclab/Olares/cli/pkg/common"
"github.com/beclab/Olares/cli/pkg/core/connector"
)
type NotEqualDesiredVersion struct {
common.KubePrepare
}
func (n *NotEqualDesiredVersion) PreCheck(runtime connector.Runtime) (bool, error) {
ksVersion, ok := n.PipelineCache.GetMustString(common.KubeSphereVersion)
if !ok {
ksVersion = ""
}
if n.KubeConf.Cluster.KubeSphere.Version == ksVersion {
return false, nil
}
return true, nil
}

View File

@@ -71,11 +71,10 @@ func upgradeKSCore() []task.Interface {
Action: new(plugins.CopyEmbedFiles),
},
&task.LocalTask{
Name: "UpgradeKSCore",
Prepare: new(common.GetMasterNum),
Action: new(plugins.CreateKsCore),
Retry: 10,
Delay: 10 * time.Second,
Name: "UpgradeKSCore",
Action: new(plugins.CreateKsCore),
Retry: 10,
Delay: 10 * time.Second,
},
&task.LocalTask{
Name: "CheckKSCoreRunning",

View File

@@ -192,7 +192,8 @@ func CheckJWS(jws string, duration int64) (*CheckJWSResult, error) {
// Check timestamp
now := time.Now().UnixMilli()
if now-timestamp > duration {
diff := now - timestamp
if max(diff, -diff) > duration {
return nil, fmt.Errorf("timestamp is out of range")
}
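The refactored check above replaces a one-sided comparison with an absolute-difference check, so timestamps too far in the future (e.g. from clock skew between peers) are rejected as well as stale ones. A minimal sketch, assuming Go 1.21+ for the built-in `max`; `withinWindow` is a hypothetical helper, not the actual `CheckJWS` code:

```go
package main

import (
	"fmt"
	"time"
)

// withinWindow reports whether ts (milliseconds) is within maxAge of
// now in either direction, matching the refactored check
// max(diff, -diff) > duration. The old one-sided now-ts > duration
// check would have accepted arbitrarily future timestamps.
func withinWindow(ts, now, maxAge int64) bool {
	diff := now - ts
	return max(diff, -diff) <= maxAge // built-in max requires Go 1.21+
}

func main() {
	now := time.Now().UnixMilli()
	fmt.Println(withinWindow(now-100, now, 600)) // recent past: true
	fmt.Println(withinWindow(now+900, now, 600)) // too far in the future: false
}
```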

View File

@@ -488,7 +488,6 @@ func (i *InstallTerminus) Execute(runtime connector.Runtime) error {
var envs = []string{
fmt.Sprintf("export %s=%s", common.ENV_KUBE_TYPE, i.KubeConf.Arg.Kubetype),
fmt.Sprintf("export %s=%s", common.ENV_REGISTRY_MIRRORS, i.KubeConf.Arg.RegistryMirrors),
fmt.Sprintf("export %s=%d", common.ENV_TOKEN_MAX_AGE, i.KubeConf.Arg.TokenMaxAge),
fmt.Sprintf("export %s=%s", common.ENV_PREINSTALL, os.Getenv(common.ENV_PREINSTALL)),
fmt.Sprintf("export %s=%s", common.ENV_HOST_IP, systemInfo.GetLocalIp()),
fmt.Sprintf("export %s=%s", common.ENV_DISABLE_HOST_IP_PROMPT, os.Getenv(common.ENV_DISABLE_HOST_IP_PROMPT)),

View File

@@ -30,7 +30,6 @@ type proxyServer struct {
func NewProxyServer() (*proxyServer, error) {
p := &proxyServer{
proxy: echo.New(),
dnsServer: "10.233.0.3:53", // default k8s dns service
}
return p, nil
@@ -38,6 +37,18 @@ func NewProxyServer() (*proxyServer, error) {
func (p *proxyServer) Start() error {
klog.Info("Starting intranet proxy server...")
if p.proxy != nil {
err := p.proxy.Close()
if err != nil {
klog.Error("close intranet proxy server error, ", err)
return err
}
p.proxy = nil
}
// closed echo proxy server cannot be restarted, so create a new one
p.proxy = echo.New()
config := middleware.DefaultProxyConfig
config.Balancer = p
config.Transport = p.initTransport()
@@ -109,8 +120,12 @@ func (p *proxyServer) Start() error {
func (p *proxyServer) Close() error {
if p.proxy != nil {
return p.proxy.Close()
err := p.proxy.Close()
if err != nil {
klog.Error("close intranet proxy server error, ", err)
}
}
p.proxy = nil
p.stopped = true
return nil
}
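As the comment in this hunk notes, a closed echo server cannot be restarted, so `Start` now discards any existing instance and creates a fresh one. The restart-by-recreation pattern can be sketched generically (stub types for illustration, not the real echo-based `proxyServer`):

```go
package main

import "fmt"

// server stands in for the echo instance; once closed it cannot serve again.
type server struct{ closed bool }

func (s *server) Close() { s.closed = true }

type proxy struct{ srv *server }

// Start closes and discards any existing server before creating a
// fresh one, mirroring the pattern in the diff above.
func (p *proxy) Start() *server {
	if p.srv != nil {
		p.srv.Close() // a closed instance cannot be reused
		p.srv = nil
	}
	p.srv = &server{} // fresh instance, safe to start
	return p.srv
}

func main() {
	p := &proxy{}
	first := p.Start()
	second := p.Start() // restarting closes the first instance
	fmt.Println(first.closed, second.closed) // true false
}
```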

View File

@@ -5,12 +5,14 @@ import (
"fmt"
"os/exec"
"strings"
"time"
"github.com/beclab/Olares/daemon/internel/intranet"
"github.com/beclab/Olares/daemon/internel/watcher"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/nets"
"github.com/beclab/Olares/daemon/pkg/utils"
"github.com/miekg/dns"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -29,7 +31,7 @@ func NewApplicationWatcher() *applicationWatcher {
func (w *applicationWatcher) Watch(ctx context.Context) {
switch state.CurrentState.TerminusState {
case state.NotInstalled, state.Uninitialized, state.InitializeFailed:
case state.NotInstalled, state.Uninitialized, state.InitializeFailed, state.IPChanging:
// Stop the intranet server if it's running
if w.intranetServer != nil {
w.intranetServer.Close()
@@ -181,6 +183,9 @@ func (w *applicationWatcher) loadServerConfig(ctx context.Context, nodeIp string
return options, nil
}
var adguardDnsPodIp string
var adguardHealth bool
func (w *applicationWatcher) loadDnsPodConfig(ctx context.Context, o *intranet.ServerOptions) error {
// try to find adguard dns pod ip and mac
k8sClient, err := utils.GetKubeClient()
@@ -199,7 +204,36 @@ func (w *applicationWatcher) loadDnsPodConfig(ctx context.Context, o *intranet.S
const adguardDnsAppLabel = "applications.app.bytetrade.io/name"
for _, pod := range dnsPods.Items {
switch {
case pod.Labels[adguardDnsAppLabel] == "adguardhome", pod.Labels["k8s-app"] == "kube-dns":
case pod.Labels[adguardDnsAppLabel] == "adguardhome":
dnsPodIp = pod.Status.PodIP
// try to connect adguard dns pod port 53 to verify it's running
if adguardDnsPodIp != dnsPodIp || !adguardHealth {
adguardDnsPodIp = dnsPodIp
err := checkHealth(dnsPodIp)
if err != nil {
klog.Warning("dial adguard dns pod tcp 53 error, ", err)
adguardHealth = false
} else {
adguardHealth = true
}
}
if adguardHealth {
dnsPodMac, calicoRouteIface, err = getPodNeighborInfo(dnsPodIp)
if err != nil {
klog.Error("get adguard dns pod mac by ip error, ", err)
return err
}
// found adguard dns pod
o.DnsPodIp = dnsPodIp
o.DnsPodMac = dnsPodMac
o.DnsPodCalicoIface = calicoRouteIface
return nil
}
case pod.Labels["k8s-app"] == "kube-dns":
dnsPodIp = pod.Status.PodIP
dnsPodMac, calicoRouteIface, err = getPodNeighborInfo(dnsPodIp)
if err != nil {
@@ -208,13 +242,7 @@ func (w *applicationWatcher) loadDnsPodConfig(ctx context.Context, o *intranet.S
}
}
if pod.Labels[adguardDnsAppLabel] == "adguardhome" {
o.DnsPodIp = dnsPodIp
o.DnsPodMac = dnsPodMac
o.DnsPodCalicoIface = calicoRouteIface
return nil
}
}
} // end for pods
// not found adguard dns pod, but core dns pod exists
if dnsPodIp != "" {
@@ -261,3 +289,15 @@ func getPodNeighborInfo(podIp string) (mac, iface string, err error) {
return "", "", fmt.Errorf("not found pod neighbor info for ip %s", podIp)
}
func checkHealth(server string) error {
c := new(dns.Client)
c.Timeout = time.Second
msg := new(dns.Msg)
msg.SetQuestion(dns.Fqdn("coredns.kube-system.svc.cluster.local."), dns.TypeA)
msg.RecursionDesired = true
_, _, err := c.Exchange(msg, server+":53")
return err
}
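The `adguardDnsPodIp`/`adguardHealth` package variables above cache the last probe result so the DNS health check runs only when the pod IP changes or the previous probe failed. A minimal sketch of that caching logic with the probe injected as a function, so no DNS dependency is needed (names are illustrative):

```go
package main

import "fmt"

var (
	lastIP      string
	lastHealthy bool
)

// healthyCached re-runs the (potentially expensive) probe only when
// the IP changed or the previous probe failed, mirroring the
// adguardDnsPodIp/adguardHealth check above.
func healthyCached(ip string, probe func(string) error) bool {
	if ip != lastIP || !lastHealthy {
		lastIP = ip
		lastHealthy = probe(ip) == nil
	}
	return lastHealthy
}

func main() {
	calls := 0
	probe := func(string) error { calls++; return nil }
	healthyCached("10.0.0.1", probe) // probes
	healthyCached("10.0.0.1", probe) // cached, no probe
	healthyCached("10.0.0.2", probe) // IP changed, probes again
	fmt.Println(calls) // 2
}
```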

View File

@@ -199,8 +199,25 @@ func ListRegistries(ctx *fiber.Ctx) ([]*Registry, error) {
if err != nil {
return nil, err
}
mirrorEndpointHosts := make(map[string]struct{})
for registryName, mirror := range mirrors {
nameToRegistries[registryName] = &Registry{Name: registryName, Endpoints: mirror.Endpoints}
for _, ep := range mirror.Endpoints {
u, perr := url.Parse(ep)
if perr != nil || u == nil {
klog.Errorf("failed to parse mirror endpoint %q for registry %q: %v", ep, registryName, perr)
continue
}
if hn := u.Hostname(); hn != "" {
mirrorEndpointHosts[hn] = struct{}{}
}
if h := u.Host; h != "" {
mirrorEndpointHosts[h] = struct{}{}
}
}
}
for host := range mirrorEndpointHosts {
delete(nameToRegistries, host)
}
images, err := ListImages(ctx, "")
if err != nil {
@@ -220,6 +237,10 @@ func ListRegistries(ctx *fiber.Ctx) ([]*Registry, error) {
continue
}
host := refspec.Hostname()
// skip any images that belong to a registry which is itself a mirror endpoint
if _, isMirrorEndpoint := mirrorEndpointHosts[host]; isMirrorEndpoint {
continue
}
if host == "" {
klog.Errorf("failed to parse image tag %s: empty host", tag)
continue
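The endpoint handling above records both the bare hostname and the `host:port` form, so a registry name can be matched against a mirror endpoint either way. A rough sketch of just that extraction step; `mirrorHosts` is an illustrative name and error handling is simplified:

```go
package main

import (
	"fmt"
	"net/url"
)

// mirrorHosts collects both u.Hostname() and u.Host for each mirror
// endpoint URL, matching the filtering logic in the diff above.
func mirrorHosts(endpoints []string) map[string]struct{} {
	hosts := make(map[string]struct{})
	for _, ep := range endpoints {
		u, err := url.Parse(ep)
		if err != nil {
			continue // skip unparsable endpoints
		}
		if hn := u.Hostname(); hn != "" {
			hosts[hn] = struct{}{}
		}
		if h := u.Host; h != "" {
			hosts[h] = struct{}{}
		}
	}
	return hosts
}

func main() {
	hosts := mirrorHosts([]string{"https://mirror.example.com:5000"})
	_, ok := hosts["mirror.example.com"]
	fmt.Println(ok) // true
}
```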

View File

@@ -340,29 +340,6 @@ const side = {
},
{text: "Dashboard", link: "/manual/olares/resources-usage"},
{text: "Profile", link: "/manual/olares/profile"},
{
text: "Studio",
collapsed: true,
link: "/manual/olares/studio/",
items: [
{
text: "Deploy an app",
link: "/manual/olares/studio/deploy",
},
{
text: "Develop in a dev container",
link: "/manual/olares/studio/develop",
},
{
text: "Package and upload",
link: "/manual/olares/studio/package-upload",
},
{
text: "Add app assets",
link: "/manual/olares/studio/assets",
},
],
},
],
},
{
@@ -647,36 +624,29 @@ const side = {
],
},
{
text: "Develop Olares app",
text: "Develop Olares apps",
link: "/developer/develop/",
items: [
{
text: "Tutorial",
text: "Develop with Studio",
collapsed: true,
link: "/developer/develop/tutorial/",
items: [
{
text: "Learn Studio",
link: "/developer/develop/tutorial/studio",
text: "Deploy an app",
link: "/developer/develop/tutorial/deploy",
},
{
text: "Create your first app",
collapsed: true,
link: "/developer/develop/tutorial/note/",
items: [
{
text: "1. Create app",
link: "/developer/develop/tutorial/note/create",
},
{
text: "2. Develop backend",
link: "/developer/develop/tutorial/note/backend",
},
{
text: "3. Develop frontend",
link: "/developer/develop/tutorial/note/frontend",
},
],
text: "Develop in a dev container",
link: "/developer/develop/tutorial/develop",
},
{
text: "Package and upload",
link: "/developer/develop/tutorial/package-upload",
},
{
text: "Add app assets",
link: "/developer/develop/tutorial/assets",
},
],
},

View File

@@ -13,6 +13,7 @@ import mediumZoom from "medium-zoom";
import OSTabs from "./components/OStabs.vue";
import VersionSwitcher from "./components/VersionSwitcher.vue";
import _ from "lodash";
import { redirects } from './redirects';
const LANGUAGE_LOCAL_KEY = "language";
let isMenuChange = false;
@@ -20,14 +21,27 @@ let isMenuChange = false;
export default {
extends: DefaultTheme,
Layout,
enhanceApp({ app }: { app: App }) {
enhanceApp({ app, router }: { app: App; router: Router }) {
app.component("Tabs", Tabs);
app.component("LaunchCard", LaunchCard);
app.component("FilterableList", FilterableList);
app.component("OSTabs", OSTabs);
app.component("VersionSwitcher", VersionSwitcher);
router.onBeforeRouteChange = (to: string) => {
const path = to.replace(/\.html$/i, ''),
toPath = redirects[path];
if (toPath) {
setTimeout(() => { router.go(toPath); })
return false;
} else {
return true;
}
}
},
setup() {
const route = useRoute();
const router = useRouter();

View File

@@ -0,0 +1,12 @@
export const redirects = {
// Refactor studio
// index page
'/manual/olares/studio/': '/developer/develop/tutorial/',
'/manual/olares/studio/deploy': '/developer/develop/tutorial/deploy',
'/manual/olares/studio/develop': '/developer/develop/tutorial/develop',
'/manual/olares/studio/package-upload': '/developer/develop/tutorial/package-upload',
'/manual/olares/studio/assets': '/developer/develop/tutorial/assets',
'/developer/develop/tutorial/studio': '/developer/develop/tutorial',
'/zh/developer/develop/tutorial/studio': '/zh/developer/develop/tutorial',
}

View File

@@ -616,32 +616,25 @@ const side = {
link: "/zh/developer/develop/",
items: [
{
text: "教程",
text: "使用 Studio 开发",
collapsed: true,
link: "/zh/developer/develop/tutorial/",
items: [
{
text: "了解 Studio",
link: "/zh/developer/develop/tutorial/studio",
text: "部署应用",
link: "/zh/developer/develop/tutorial/deploy",
},
{
text: "创建首个应用",
collapsed: true,
link: "/zh/developer/develop/tutorial/note/",
items: [
{
text: "1. 创建应用",
link: "/zh/developer/develop/tutorial/note/create",
},
{
text: "2. 开发后端",
link: "/zh/developer/develop/tutorial/note/backend",
},
{
text: "3. 开发前端",
link: "/zh/developer/develop/tutorial/note/frontend",
},
],
text: "使用开发容器",
link: "/zh/developer/develop/tutorial/develop",
},
{
text: "打包与上传",
link: "/zh/developer/develop/tutorial/package-upload",
},
{
text: "添加应用素材",
link: "/zh/developer/develop/tutorial/assets",
},
],
},

View File

@@ -1,15 +1,30 @@
# Develop Olares application
# Develop Olares applications
Developing applications on Olares is not much different from regular website development. Once you learn a few basic Olares concepts, you can start creating applications on this platform.
Developing applications on Olares leverages standard web technologies and containerization. If you are familiar with building web applications or Docker containers, you already have the skills needed to build for Olares.
- [Core Concepts of Olares](../concepts/index.md)
- [Understanding the Format of Olares Application Chart](./package/chart.md)
- [The structure of the Olares Application Chart](./package/chart.md)
- [Configuration guide and field descriptions of `OlaresManifest.yaml`](./package/manifest.md)
- [Extensions field to Helm in Olares](./package/extension.md)
This guide takes you through the complete lifecycle of an Olares application, from your first line of code in Studio to publishing on the Market.
- [Exploring Our Tutorials](./tutorial/)
- [Learn about Studio, an Olares Development Tool](./tutorial/studio)
- [Creating your first application](./tutorial/note/)
- [Exploring Advanced Concepts](./advanced/)
- [Submitting Applications to the Olares Market](./submit/)
## Before you begin
Before getting started, it's helpful to review some concepts:
- [Application](../concepts/application.md)
- [Network](../concepts/network.md)
## Step 1: Develop with Studio
Olares Studio is a development platform that accelerates your build cycle. It provides a pre-configured workspace to build, debug, and test your applications directly on the platform.
* **[Deploy an app](./tutorial/deploy.md)**: Learn how to quickly deploy an app from an existing Docker image, configure it, and test it in Studio.
* **[Develop in a dev container](./tutorial/develop.md)**: Spin up a remote development environment (Dev Container) and connect it to VS Code for a seamless coding experience.
* **[Package and upload](./tutorial/package-upload.md)**: Convert your running application into an Olares-compatible package and upload it for testing.
* **[Add app assets](./tutorial/assets.md)**: Configure icons, screenshots, and descriptions to make your application store-ready.
## Step 2: Package your application
To publish your application to the Olares Market, you must structure it according to the Olares Application Chart (OAC) specification. This format extends Helm Charts to support Olares-specific features like permission management and sandboxing.
* **[Understand the Olares Application Chart](./package/chart.md)**: Understand the file structure and requirements of an application package.
* **[Understand `OlaresManifest.yaml`](./package/manifest.md)**: A comprehensive guide to the `OlaresManifest.yaml` file, which defines your app's metadata, permissions, and system integration points.
* **[Understand Helm extensions](./package/extension.md)**: Learn about the custom fields and capabilities Olares adds to standard Helm deployments.
## Step 3: Submit your application
Once your application is built and packaged, the final step is to share it with the Olares community.
* **[Submit to Market](./submit/index.md)**: Learn how to submit your application to the Olares Market for review and distribution.

View File

@@ -17,8 +17,8 @@ outline: [2, 3]
### 1. Develop and test your application
Before submitting an application, please ensure that it has been thoroughly tested on your Olares.
- Use DevBox's dev-container to test and debug your application in a real online environment. [Learn more about DevBox](../tutorial/studio).
- Use the [custom installation](/manual/olares/market.md#install-custom-applications) in the Market app for user testing.
- Use Studio's dev-container to test and debug your application in a real online environment. [Learn more about Studio](../tutorial/).
- Use the [custom installation](../tutorial/package-upload.md) in the Market app for user testing.
### 2. Submit an application
The submission of the application needs to be completed through a **Pull Request**. Here's how:

View File

@@ -6,7 +6,7 @@ description: Deploy a single-container Docker app to Olares using Studio.
This guide explains how to deploy a single-container Docker app to Olares using Studio.
:::info For single-container apps
This method supports apps that run from a single container image. For multi-container apps (for example, a web service plus a separate database), use the workflow in the [developer documentation](../../../developer/develop/tutorial/index.md) instead.
This method supports apps that run from a single container image.
:::
:::tip Recommended for testing
Studio-created deployments are best suited for development, testing, or temporary use. Upgrades and long-term data persistence can be limited compared to installing a packaged app from the Market. For production use, consider [packaging and uploading the app](package-upload.md) and installing it via the Market.
@@ -44,7 +44,6 @@ services:
- "8282:80/tcp"
environment:
TZ: 'America/Toronto'
# Volumes store your data between container upgrades
volumes:
- './db:/var/www/html/db'
- './logos:/var/www/html/images/uploads/logos'
@@ -67,7 +66,7 @@ These fields define the app's core components. You can find this information as
:::
3. For **Instance Specifications**, enter the minimum CPU and memory requirements. For example:
- **CPU**: 2 core
- **Memory**: 1 G
- **Memory**: 1 Gi
![Deploy Wallos](/images/manual/olares/studio-deploy-wallos.png#bordered)
### Add environment variables
@@ -83,7 +82,7 @@ Environment variables are used to pass configuration settings to your app. In th
Volumes connect storage on your Olares device to a path inside the app's container, which is essential for saving data permanently. These are defined using the `-v` flag or in the `volumes:` section.
:::info Host path options
The host path is where Olares stores the data, and the mount path is the path inside the container. Olares provides three managed host path prefixes:
The host path is where Olares stores the data, and the mount path is the path inside the container. Studio provides three managed host path prefixes:
- `/app/data`: App data directory. Data can be accessed across nodes and is not deleted when the app is uninstalled. Appears under `/Data/studio` in Files.
- `/app/cache`: App cache directory. Data is stored in the node's local disk and is deleted when the app is uninstalled. Appears under `/Cache/<device-name>/studio` in Files.
@@ -96,15 +95,23 @@ The host path is where Olares stores the data, and the mount path is the path in
This app requires two volumes. You will add them one by one.
1. Add the database volume. This data is for high-frequency I/O and does not need to be saved permanently. Map it to `/app/cache` so it will be automatically deleted when the app is uninstalled.
1. Click **Add** next to **Storage Volume**.
2. For **Host path**, select `/app/cache`, then enter `/db`.
3. For **Mount path**, enter `/var/www/html/db`.
4. Click **Submit**.
2. Add the logo volume. This is user-uploaded data that should be persistent and reusable, even if the app is reinstalled. Map it to `/app/data`.
1. Click **Add** next to **Storage Volume**.
2. For **Host path**, select `/app/data`, then enter `/logos`.
3. For **Mount path**, enter `/var/www/html/images/uploads/logos`
4. Click **Submit**.
a. Click **Add** next to **Storage Volume**.
b. For **Host path**, select `/app/cache`, then enter `/db`.
c. For **Mount path**, enter `/var/www/html/db`.
d. Click **Submit**.
2. Add the logo volume. This is user-uploaded data that should be persistent and reusable, even if the app is reinstalled. Map it to `/app/data`.
a. Click **Add** next to **Storage Volume**.
b. For **Host path**, select `/app/data`, then enter `/logos`.
c. For **Mount path**, enter `/var/www/html/images/uploads/logos`.
d. Click **Submit**.
![Add volumes](/images/manual/olares/studio-add-storage-volumes.png#bordered)
You can check Files later to verify the mounted paths.
@@ -118,37 +125,36 @@ If your app needs Postgres or Redis, enable it under **Instance Specifications**
![Enable databases](/images/manual/olares/studio-enable-databases.png#bordered)
When enabled, Studio provides dynamic variables. You must use these variables in the **Environment Variables** section for your app to connect to the database.
- **Postgres variables:**
- **Postgres variables**
| Variables | Description |
|--------------|-----------------------|
| $(PG_USER) | PostgreSQL username |
| $(PG_DBNAME) | Database name |
| $(PG_PASS) | Postgres Password |
| $(PG_HOST) | Postgres service host |
| $(PG_PORT) | Postgres service port |
| Variables | Description |
|----------------|-----------------------|
| `$(PG_USER)` | PostgreSQL username |
| `$(PG_DBNAME)` | Database name |
| `$(PG_PASS)` | Postgres Password |
| `$(PG_HOST)` | Postgres service host |
| `$(PG_PORT)` | Postgres service port |
- **Redis variables:**
- **Redis variables**
| Variables | Description |
|---------------|--------------------|
| $(REDIS_HOST) | Redis service host |
| $(REDIS_PORT) | Redis service port |
| $(REDIS_USER) | Redis username |
| $(REDIS_PASS) | Redis password |
| Variables | Description |
|-----------------|--------------------|
| `$(REDIS_HOST)` | Redis service host |
| `$(REDIS_PORT)` | Redis service port |
| `$(REDIS_USER)` | Redis username |
| `$(REDIS_PASS)` | Redis password |
### Generate the app project
1. Once all your configurations are set, click **Create**. This generates the app's project files.
2. After creation, Studio generates the package files for your app, and then automatically deploys the app. You can check the status in the bottom bar.
3. When the app is successfully deployed, click **Preview** in the top-right corner to launch it.
![Preveiw wallos](/images/manual/olares/studio-preview-wallos.png#bordered)
![Preview Wallos](/images/manual/olares/studio-preview-wallos.png#bordered)
## Review the package files and test the app
Apps deployed from Studio include a `-dev` suffix in the title to distinguish them from Market installations.
![Check deployed app](/images/manual/olares/studio-app-with-dev-suffix.png#bordered)
You can click on files like `OlaresManifest.yaml` to review and make changes. For example, to change the app's display name and logo.
You can click on files like `OlaresManifest.yaml` to review and make changes. For example, to change the app's display name and logo:
1. Click **<span class="material-symbols-outlined">box_edit</span>Edit** in the top-right to open the editor.
2. Click `OlaresManifest.yaml` to view the content.
@@ -164,7 +170,6 @@ You can click on files like `OlaresManifest.yaml` to review and make changes. Fo
:::
![Change app icon](/images/manual/olares/studio-change-app-icon1.png#bordered)
## Uninstall or delete the app
If you no longer need the app, you can remove it.
1. Click <span class="material-symbols-outlined">more_vert</span> in the top-right corner.
@@ -175,12 +180,11 @@ If you no longer need the app, you can remove it.
## Troubleshoot a deployment
### Cannot install the app
If installation fails, review the error at the bottom of the page and click **View** to expand details.
![Check app status](/images/manual/olares/studio-check-app-status.png#bordered)
If installation fails, review the error at the bottom of the page and click **View** to check details.
### Run into issues when the app is running
Once running, you can manage the app from its deployment details page in Studio. The interface of this page is similar to Control Hub. If details don't appear, refresh the page.
You can:
- Use the **Stop** and **Restart** controls to retry. This action can often resolve runtime issues like a frozen process.
- Check events or logs to investigate runtime errors. See [Export container logs for troubleshooting](../controlhub/manage-container.md#export-container-logs-for-troubleshooting) for details.
- Use the **Stop** or **Restart** controls to retry. This action can often resolve runtime issues like a frozen process.
- Check events or logs to investigate runtime errors. See [Export container logs for troubleshooting](../../../manual/olares/controlhub/manage-container.md#export-container-logs-for-troubleshooting) for details.
![App deployment details](/images/manual/olares/studio-app-deployment-details.png#bordered)

View File

@@ -5,10 +5,7 @@ description: Learn how to use Studio to set up a dev container, access it via VS
# Develop in a dev container
Olares Studio allows you to spin up a pre-configured dev container to write and debug code (such as Node.js scripts or CUDA programs) without managing local infrastructure. This provides an isolated environment identical to the production runtime.
The following guide shows the setup workflow using a Node.js project as an example.
:::info
This workflow is optimized for iterative coding and testing. If you intend to publish the application to the Olares Market, you must create your own image and follow the [developer documentation](../../../developer/develop/submit/index.md) for final configuration.
:::
The following guide shows the development and setup workflow using a Node.js project as an example.
## Prerequisite
- Olares version 1.12.2 or later.
@@ -56,7 +53,7 @@ If you prefer your local settings and extensions, you can tunnel into the contai
code tunnel
```
5. Follow the terminal prompts to authenticate using a Microsoft or GitHub account via the provided URL.
6. Assign a name to the tunnel when prompted (e.g., `myapp-demo`). This will output a vscode.dev URL tied to this remote workspace.
6. Assign a name to the tunnel when prompted (e.g., `myapp-demo`). This will output a `vscode.dev` URL tied to this remote workspace.
![Create a secure tunnel](/images/manual/olares/studio-create-a-secure-tunnel.png#bordered)
7. Open VS Code on your local machine, click the **><** icon in the bottom-left, and select **Tunnel**.
@@ -183,6 +180,9 @@ You can follow the same steps to modify `OlaresManifest.yaml` and `deployment.ya
- name: "80"
port: 80
targetPort: 80
- name: myweb-dev-8080
port: 8080
targetPort: 8080
# Add the following
- name: myweb-dev-8081 # Must match entrance name
port: 8081
@@ -195,10 +195,10 @@ You can follow the same steps to modify `OlaresManifest.yaml` and `deployment.ya
4. Click **Apply** to redeploy the container.
You can verify the active ports in **Services** > **Ports**.
Once deployed, go to **Services** > **Ports**. You can see your new port listed here.
![Verify active ports](/images/manual/olares/studio-verify-active-ports.png#bordered)
### Verify the new port
### Test the connection
1. Update `index.js` to listen on the new port:
```js
const express = require('express');

View File

@@ -1,15 +1,26 @@
# Tutorial
---
description: Get started with Studio to deploy Docker-based apps, develop new apps, package and upload locally, and manage assets on your Olares device.
---
# Deploy and develop apps in Olares
Welcome to the Olares developer guides. These detailed tutorials offer a step-by-step guide on building an Olares Application from scratch.
Studio provides a real Olares environment for building, porting, and testing apps when cloud features and the sandbox system are hard to simulate locally. With Studio you can:
- Create a new Olares app in an online development container.
- Port an existing app, adjust its configurations, and test the installation flow.
- Package your app into a chart and download it when your app is ready.
To get started, you can learn some basic concepts of Olares, such as:
- [Olares architectural components](../../concepts/architecture.md)
- [Olares Application Chart](../../develop/package/chart.md)
- [Olares Extension on Helm](../package/extension.md)
## Access Studio
You must manually install Studio:
1. Open **Market**, and search for "Studio".
![Studio](/images/manual/olares/studio.png#bordered)
These fundamentals will help you grasp our development process more effectively.
2. Click **Get**, then **Install**, and wait for installation to complete.
You can also [learn about DevBox](studio.md), a built-in app that Olares provides for developers to build Olares applications.
After installation, launch Studio from Market or from Launchpad.
If you're brand new to Olares development and want to jump straight into coding, start with the [**Create your first Olares app**](./note/index.md). This tutorial will step you through the process of building a small note application.
## Understand the Studio UI
The sidebar and **Home** page organize your main tasks in Studio:
- **Home**: A welcome page with shortcuts to common actions and documentation.
- **Applications**: A list of apps you have created and deployed with Studio.
- **Start**: You can start deploying or developing apps, or uploading an app from a local chart file.
![Understand Studio user interface](/images/manual/olares/studio-ui.png#bordered)

View File

@@ -8,7 +8,7 @@ Apps created in Studio are ideal for development and testing. For stable, long-t
## Download the App package from Studio
After confirming that your app works as expected, you can download its complete installation package:
1. Open your app project in **Studio**.
2. Click <span class="material-symbols-outlined">more_vert</span> in the top-right corner.
@@ -24,6 +24,4 @@ After confirming that your app works as expected, you can download its complete
Once finished, you can click **Open** to launch it.
All custom-installed apps will appear under the **My Olares** > **Upload** tab.
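
Before uploading the downloaded package, it can help to confirm the archive actually contains a complete chart. A minimal sketch, assuming the package is a gzipped tarball with a top-level chart directory like an ordinary Helm package; the exact archive format Studio produces may differ:

```python
import tarfile

REQUIRED = {"Chart.yaml", "values.yaml"}  # minimal Helm chart members

def missing_members(package: str) -> set[str]:
    """Return the required chart files absent from a packaged .tgz archive."""
    with tarfile.open(package, "r:gz") as tar:
        # Helm-style packages nest files under a top-level chart directory,
        # e.g. myapp/Chart.yaml, so strip the first path component.
        names = {m.name.split("/", 1)[1] for m in tar.getmembers() if "/" in m.name}
    return REQUIRED - names

# Usage: missing_members("myapp-0.0.1.tgz")
# An empty set means every required file is present.
```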

View File

@@ -1,13 +0,0 @@
# Learn about Studio
At Olares, we provide a development tool called Studio. It helps developers create applications for **Olares**.
- Why is Studio necessary for developers?
Olares has many cloud-based features that are difficult to simulate in a standalone development environment. Furthermore, the unique sandbox system of **Olares** requires a real system environment for end-to-end testing. To simplify app simulation for developers and minimize system integration efforts during development, we provide **Studio**, a quick, automatic toolset for creating app sandboxes.
- What features does Studio have?
- In Studio, you can build an app and generate the corresponding Olares Application Configuration. These deployment files can be modified, allowing you to port an existing app and deploy it to Olares. During the modification process, you can repeatedly attempt installation and resolve any issues that arise. Once the app passes your tests, you can download your Application Chart and submit it to the [Olares Market Repository](https://github.com/beclab/apps).
- In addition to porting existing apps, you can also create a native Olares application in Studio. Studio provides an online development container where coders can work in a real environment, utilize other system interfaces, database clusters, and more.

View File

@@ -1,49 +0,0 @@
---
description: Get started with Studio to deploy Docker-based apps, develop new apps, package and upload locally, and manage assets on your Olares device.
---
# Deploy and develop apps in Olares
Studio provides a real Olares environment for building, porting, and testing apps when cloud features and the sandbox system are hard to simulate locally. With Studio you can:
- Create a new Olares app in an online development container.
- Port an existing app, adjust its configurations, and test the installation flow.
- Package your app into a chart and download it when your app is ready.
## Access Studio
Studio is available in Olares Market and must be installed manually.
1. Open **Market**, and search for "Studio".
![Studio](/images/manual/olares/studio.png#bordered)
2. Click **Get**, then **Install**, and wait for installation to complete.
After installation, launch Studio from Market or from Launchpad.
## Understand the Studio UI
The sidebar and **Home** page organize your main tasks in Studio:
- **Home**: A welcome page with shortcuts to common actions and documentation.
- **Applications**: A list of apps you have created and deployed with Studio.
- **Start**: You can start deploying or developing apps, or uploading an app from a local chart file.
![Understand Studio user interface](/images/manual/olares/studio-ui.png#bordered)
---
<div>
<h4><a href="./deploy">Deploy an app from Docker image</a></h4>
Deploy an app from an existing Docker image, configure it, and test it in Studio.
</div>
<div>
<h4><a href="./develop">Develop in a dev container</a></h4>
Build and debug a new app using the Studio development environment.
</div>
<div>
<h4><a href="./package-upload">Package and upload the app to Market</a></h4>
Download an installable package and upload it to Market for local use.
</div>
<div>
<h4><a href="./assets">Add app assets</a></h4>
Use Olares image hosting to add and manage creative assets for your app.
</div>
