Compare commits

...

74 Commits

Author SHA1 Message Date
lovehunter9
f823625c8e feat: files-server delete all for search3 2025-06-06 17:19:34 +08:00
lovehunter9
1a06378e50 feat: files-server batch_delete 2025-06-06 14:18:35 +08:00
aby913
50f6b127ac backup-server: improve message (#1405) 2025-06-06 00:29:11 +08:00
hysyeah
df23dc64e3 app-service,bfl: fix upgrade failed bug,add appid to pod label;fix call analytics-server (#1404)
* app-service,bfl: fix upgrade failed bug,add appid to pod label;fix call analytics-server

* fix(user-service): add nats env

---------

Co-authored-by: qq815776412 <815776412@qq.com>
2025-06-06 00:28:40 +08:00
lovehunter9
f704cf1846 fix: files-server bug when listing external if any smb folder is stated as host is down (#1403) 2025-06-06 00:27:40 +08:00
simon
66d0eccb2f feat(knowledge): websocket update (#1402)
websocket
2025-06-06 00:27:09 +08:00
aby913
a226fd99b8 refactor: CLI code refactor (#1401)
* refactor: remove unused account files

* refactor: remove unused socat task

* refactor: remove unused flex conntrack task

* refactor: remove unused cri download binaries module

* refactor: remove hook demo

* refactor: remove unused repositoryOnline, repository modules

* refactor: remove unused os rollback

* refactor: remove unused clear node os module

* refactor: remove unused backup dir

* refactor: remove unused local repo manager

* refactor: remove unused cluster pre check module and tasks

* refactor: remove unused cri migrate module

* refactor: remove unused k3s uninstall module and tasks

* refactor: remove unused k8s node delete module

* refactor: remove unused phase startup

* refactor: remove unused storage minio operator module

* refactor: remove unused ks modules

* refactor: remove unused ks plugins cache, redis tasks

* refactor: remove unused ks plugins snapshot controller module

* refactor: remove unused ks plugins monitor notification module

* refactor: remove unused plugins kata and nfd

* refactor: remove unused scripts

* refactor: remove unused filesystem module

* refactor: remove unused certs modules

* refactor: remove unused bootstrap confirm modules

* refactor: remove unused images tasks

* refactor: remove unused k8s prepares

* refactor: remove unused installer module

* refactor: remove unused registry modules
2025-06-06 00:26:37 +08:00
huaiyuan
60b823d9db desktop: update version to v1.3.70 (#1400)
fix(desktop): update version to v1.3.70
2025-06-06 00:24:33 +08:00
wiy
7b9be6cce7 feat(vault-server&user-service): update user server & vault-server support websocket (#1408)
feat(vault-server&settings&user-service): update user server & vault-server support websocket
2025-06-06 00:23:52 +08:00
eball
b99fc51cc2 gpu: fix gpu scheduler bugs (#1407) 2025-06-06 00:19:38 +08:00
salt
cdf70c5c58 fix: fix resources conflict for search3monitor (#1406)
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-05 22:59:00 +08:00
Peng Peng
1c7fa01df8 fix: remove duplicate container in gpu yaml and notification yaml in user space (#1398) 2025-06-05 14:32:54 +08:00
salt
2b4b590a3a feat: add file monitor for data, drive, external, cache. (#1397)
* feat: search3 add monitor

* fix: add SecurityContext for monitor

* fix: monitor init generate_monitor_folder_path_from_data_root

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-05 14:32:20 +08:00
Peng Peng
2bef0056d3 feat: add kvroks dependency (#1399) 2025-06-05 14:31:50 +08:00
Peng Peng
da5ad17e7b refactor: change files, monitor, vault from apps to framework 2025-06-05 11:54:37 +08:00
hysyeah
3b14b95469 app-service,bfl: gpu namespace netpol;refresh token api;nats user perm (#1395)
* app-service,bfl: gpu namespace netpol;refresh token api;nats user perm

* add knowledge, market nats

* Update system-frontend.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-06-05 01:12:52 +08:00
berg
d0a5da4266 market, settings: update version to v1.3.69 (#1396)
feat: update market and settings to v1.3.69
2025-06-05 00:26:12 +08:00
dkeven
a2efa54140 feat: dedicated namespace for gpu-scheduler (#1394) 2025-06-05 00:05:15 +08:00
dkeven
f0106180d5 fix(daemon): reset upgrade target when not upgrading (#1390) 2025-06-04 21:52:57 +08:00
dkeven
9261253126 feat: get rid of nvshare (#1389) 2025-06-04 21:50:46 +08:00
lovehunter9
16f554ed54 feat: seafile separate image (#1383) 2025-06-04 20:48:40 +08:00
dkeven
ac212583ea fix(ci): pass in git ref when calling workflow for submodule (#1392) 2025-06-04 18:34:28 +08:00
dkeven
186d6dd309 fix(ci): use correct daily release version for daemon (#1388) 2025-06-04 11:33:00 +08:00
lovehunter9
79f96c94f7 fix: files sync dir rename bug (#1387) 2025-06-03 23:45:46 +08:00
hysyeah
5bd1bd2ab9 kubesphere,app-service: add disk partion metric; (#1386)
kubesphere,app-service: add disk partion metric;fix cancel op ctx
2025-06-03 23:45:19 +08:00
wiy
6be4e1ff6e feat(system-frontend): update user-service support web socket (#1385)
* feat(system-frontend): update user-service support web socket

* feat: rename monitoring to system-apps
2025-06-03 23:44:51 +08:00
aby913
df722bf1cd backup-server: package name adjustment (#1384)
backup-server: package rename
2025-06-03 23:44:22 +08:00
eball
d428295fa5 bfl: crash and bulk http clients (#1382) 2025-06-03 23:43:49 +08:00
dkeven
7cecd9d360 refactor: integrate Olares daemon's code & CI into main repo (#1381) 2025-06-03 17:37:37 +08:00
simon
a48de4efd4 knowledge: fix backup & remove entry file bugs (#1380)
knowledge
2025-06-03 11:11:44 +08:00
berg
d8078cc8ce market: modify the market app status based on the new version status of appService (#1379)
feat: modify the market app status based on the new version status of appService
2025-06-02 23:15:20 +08:00
hysyeah
f4d9487d1f app-service: fix cancel operation context (#1378) 2025-05-31 23:01:32 +08:00
eball
b5121bde2e analytics: fix typo (#1377) 2025-05-31 23:00:56 +08:00
dkeven
5f79f7fbe4 fix(cli): mitigate some security issues by bumping dependency versions (#1375) 2025-05-30 22:28:59 +08:00
lovehunter9
df6f0bf2d8 feat: files: path unified uri, copy task & mounted data (#1376) 2025-05-30 21:57:02 +08:00
dkeven
21be331121 fix(cli): lift cuda version restraint to 12.8 (#1374) 2025-05-30 21:55:00 +08:00
dkeven
cff07d4c2b fix(cli): just install a single instance of GPU driver (#1372) 2025-05-30 21:51:54 +08:00
hysyeah
a371b3ce44 cli,kubesphere: add some memory metrics (#1371)
cli,ks: add some memory metrics
2025-05-30 21:48:26 +08:00
Calvin W.
2712202c48 docs: update readme structure (#1373)
* docs: update readme structure

* revert format change

* add personal cloud image for jp
2025-05-30 15:36:58 +08:00
hysyeah
7b17f3b2a4 app-service: fix some state bug (#1370) 2025-05-30 00:33:59 +08:00
aby913
cc6b2c9239 backup-server: support app restore (#1369) 2025-05-30 00:33:39 +08:00
wiy
46df22854d fix(vault & files): frontend nginx config error (#1366)
* fix(desktop): fixed the issue that the customized desktop background image does not display

* feat: update login & settings & profile version

* fix(vault & files):  nginx  error

* fix: vault.conf error
2025-05-29 20:27:54 +08:00
eball
eec03ee9b4 bfl: add a new olares-info api (#1365) 2025-05-29 20:25:11 +08:00
dkeven
0c5a80653e feat: schedule/allocate pod by gpu bindings and different share modes (#1363) 2025-05-29 20:24:53 +08:00
dkeven
e58743fa87 fix(cli): remove the local flag in local release version (#1361) 2025-05-29 20:10:44 +08:00
dkeven
d5673b81e0 fix(cli): also consider 3D controller when detecting GPU by lspci (#1360) 2025-05-29 20:07:39 +08:00
hysyeah
37e37a814d olares: add nats info for system files,vault,seafile,search,notification (#1359) 2025-05-29 20:05:09 +08:00
Calvin W.
73d484b681 docs: update olares arch image (#1364)
* docs: update olares arch image

* add a wrap in title
2025-05-29 17:47:28 +08:00
Calvin W.
ddf10130f0 docs: update illustration for personal cloud (#1362)
* docs: update illustration for personal cloud

* update link

* refine wording and add system app screenshots back
2025-05-29 17:08:32 +08:00
hysyeah
5e0534cc2c app-service: app install state (#1358) 2025-05-28 23:49:31 +08:00
wiy
58a7ce05b8 fix(desktop): that the customized desktop background image does not display (#1357)
* fix(desktop): fixed the issue that the customized desktop background image does not display

* feat: update login & settings & profile version
2025-05-28 23:48:29 +08:00
Peng Peng
448a5c1551 fix(notification): fix crash issue (#1356) 2025-05-28 23:47:58 +08:00
dkeven
4e7ba01bcd cli(refactor): adjust local release logic for new project structure (#1355) 2025-05-28 23:47:16 +08:00
wiy
a034b37239 fix(desktop): websocket config error (#1354)
* feat: move files&vault&desktop&market to system frontend

* feat: fix market entrance error

* fix: app nginx config format error

* feat: delete files deploy

* feat: remove desktop deploy

* fix(system-frontend): fix ci build error & desktop add ws config

* fix(system-frontend): uploads-temp double error

* Update market_deploy.yaml

* Update system-frontend.yaml

* fix(desktop): ws config error

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-05-28 23:46:09 +08:00
Peng Peng
bf17a91062 feat: remove unused permission 2025-05-28 11:56:57 +08:00
Peng Peng
76d62daf32 feat(notification): change ci method and reduce docker image size (#1353)
feat(notification): change ci method
2025-05-28 01:48:16 +08:00
wiy
907fbf681e feat: move files & vault & market & desktop frontend to system frontend (#1351)
* feat: move files&vault&desktop&market to system frontend

* feat: fix market entrance error

* fix: app nginx config format error

* feat: delete files deploy

* feat: remove desktop deploy

* fix(system-frontend): fix ci build error & desktop add ws config

* fix(system-frontend): uploads-temp double error

* Update market_deploy.yaml

* Update system-frontend.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-05-27 23:42:46 +08:00
dkeven
1e1b6a5007 fix(cli): update CUDA version in node labels after upgrading GPU driver (#1352) 2025-05-27 17:51:43 +08:00
dkeven
ea6e199e8e fix(otel): specify auto instrumentation image for nodejs service (#1350) 2025-05-27 17:51:11 +08:00
simon
a323d03fe5 knowledge: add backup function (#1349)
knowledge to v0.12.6
2025-05-27 17:48:43 +08:00
aby913
9a984ea34f backup-server: support app backup (#1348) 2025-05-27 17:47:14 +08:00
hysyeah
355b805540 kubesphere,node-exporter: add metric data_bytes_written, data_bytes_read (#1347) 2025-05-27 17:46:47 +08:00
Calvin W.
5936da1268 docs: add nas comparison doc link (#1346)
* docs: add nas comparison doc link

* fix format
2025-05-27 17:45:58 +08:00
dkeven
c36ff0a630 fix(ci): pass correct version var when deploying in CI (#1345) 2025-05-26 19:18:04 +08:00
dkeven
9091d382cb fix(ci): upload in correct cli artifacts output path (#1344) 2025-05-26 18:23:38 +08:00
dkeven
22fdd7b86f refactor: integrate CLI's code & CI into main repo (#1343) 2025-05-26 17:21:25 +08:00
hysyeah
532b0a3e24 app-service: app installation refactor (#1342)
app-service: app install refactor
2025-05-26 01:57:19 +08:00
Peng Peng
1371f5aed2 docs: Add a note indicating that the code repository is under migration. (#1341) 2025-05-23 22:49:26 +08:00
Calvin W.
6f6f7cd7a2 docs: update project directory info and intro (#1340)
* docs: update project directory info and intro

* update intro for cn and urls
2025-05-23 21:32:13 +08:00
eball
2c41b1ff8e hami: gpu slicing scheduler (#1339) 2025-05-22 23:35:36 +08:00
hysyeah
85527f46f1 ks: update cronjob gv to batch/v1 (#1338) 2025-05-22 23:34:44 +08:00
eball
9cca15c677 tapr: add roles to pg user (#1337) 2025-05-22 23:33:55 +08:00
aby913
a29653d16c backup-server: code refactoring and process improvement (#1336) 2025-05-22 14:43:50 +08:00
eball
f2235e8f49 olares: compatible with current version olares-cli (#1335)
* olares: compatible with current version olares-cli

* fix: release workflows bug
2025-05-22 01:01:15 +08:00
700 changed files with 107038 additions and 5015 deletions

View File

@@ -48,6 +48,32 @@ jobs:
# if: steps.list-changed.outputs.changed == 'true'
# run: ct install --chart-dirs wizard/charts,wizard/config --target-branch ${{ github.event.repository.default_branch }}
test-version:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.generate.outputs.version }}
steps:
- id: generate
run: |
v=1.12.0-$(echo $RANDOM)
echo "version=$v" >> "$GITHUB_OUTPUT"
upload-cli:
needs: test-version
uses: ./.github/workflows/release-cli.yaml
secrets: inherit
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
upload-daemon:
needs: test-version
uses: ./.github/workflows/release-daemon.yaml
secrets: inherit
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
push-image:
runs-on: ubuntu-latest
@@ -89,6 +115,7 @@ jobs:
push-deps:
needs: [test-version, upload-daemon]
runs-on: ubuntu-latest
steps:
@@ -104,10 +131,12 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
push-deps-arm64:
needs: [test-version, upload-daemon]
runs-on: [self-hosted, linux, ARM64]
steps:
@@ -126,54 +155,52 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
install-test:
needs: [lint-test, push-image, push-image-arm64, push-deps, push-deps-arm64]
upload-package:
needs: [lint-test, test-version, push-image, push-image-arm64, push-deps, push-deps-arm64]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: 'Test tag version'
id: vars
run: |
v=1.12.0-$(echo $RANDOM)
echo "tag_version=$v" >> $GITHUB_OUTPUT
- name: Package installer
run: |
bash build/build.sh ${{ needs.test-version.outputs.version }}
- name: Package installer
run: |
bash build/build.sh ${{ steps.vars.outputs.tag_version }}
- name: Upload package
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ needs.test-version.outputs.version }}.tar.gz > install-wizard-v${{ needs.test-version.outputs.version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ needs.test-version.outputs.version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ needs.test-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.test-version.outputs.version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ needs.test-version.outputs.version }}.tar.gz --acl=public-read
- name: Upload package
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz > install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz --acl=public-read
install-test:
needs: [test-version, upload-cli, upload-package]
runs-on: ubuntu-latest
steps:
- name: Deploy Request
uses: fjogeleit/http-request-action@v1
with:
url: 'https://cloud-dev-api.bttcdn.com/v1/resource/installTest'
method: 'POST'
customHeaders: '{"Authorization": "${{ secrets.INSTALL_SECRET }}"}'
data: 'versions=${{ steps.vars.outputs.tag_version }}&downloadUrl=https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz'
data: 'versions=${{ needs.test-version.outputs.version }}&downloadUrl=https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${{ needs.test-version.outputs.version }}.tar.gz'
contentType: "application/x-www-form-urlencoded"
- name: Check Reault
- name: Check Result
uses: eball/poll-check-endpoint@v0.1.0
with:
url: https://cloud-dev-api.bttcdn.com/v1/resource/installResult
@@ -184,4 +211,4 @@ jobs:
timeout: 1800000
interval: 30000
customHeaders: '{"Authorization": "${{ secrets.INSTALL_SECRET }}", "Content-Type": "application/x-www-form-urlencoded"}'
data: 'versions=${{ steps.vars.outputs.tag_version }}'
data: 'versions=${{ needs.test-version.outputs.version }}'
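The new `test-version` job in this diff centralizes version generation as a job output that every downstream job consumes via `needs.test-version.outputs.version`, replacing the old per-job `tag_version` steps. A minimal local sketch of the output-file mechanism (the `/tmp` path is a stand-in for the runner-provided `$GITHUB_OUTPUT` file; this is an illustration, not the workflow itself):

```shell
# Emulate a GitHub Actions step writing a job output locally (bash, for $RANDOM).
out="/tmp/gh_output.txt"      # stand-in for the runner's $GITHUB_OUTPUT file
v="1.12.0-$RANDOM"            # same pseudo-random test version as the workflow
echo "version=$v" >> "$out"   # key=value lines in this file become step outputs
echo "generated $v"
```

On a real runner, declaring `version: ${{ steps.generate.outputs.version }}` under the job's `outputs:` is what promotes the step output to a job output.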

View File

@@ -33,5 +33,5 @@ jobs:
- name: Run chart-testing (lint)
run: |
ct lint --chart-dirs build/installer/wizard/config,build/installer/wizard/config/apps,build/installer/wizard/config/gpu --check-version-increment=false --all
ct lint --chart-dirs .dist/wizard/config,.dist/wizard/config/apps,.dist/wizard/config/gpu --check-version-increment=false --all

.github/workflows/release-cli.yaml (new file)
View File

@@ -0,0 +1,56 @@
name: Release CLI
on:
workflow_call:
inputs:
version:
type: string
required: true
ref:
type: string
workflow_dispatch:
jobs:
goreleaser:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
continue-on-error: true
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.24.3
- name: Install x86_64 cross-compiler
run: sudo apt-get update && sudo apt-get install -y build-essential
- name: Install ARM cross-compiler
run: sudo apt-get update && sudo apt-get install -y gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v3.1.0
with:
distribution: goreleaser
workdir: './cli'
version: v1.18.2
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
run: |
cd cli/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
# coscmd upload $file /$file
done
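The upload step above globs every `*.tar.gz` that GoReleaser leaves in `cli/output` and copies each one to S3. A dry-run sketch of the same loop, with a hypothetical artifact name and an `echo` standing in for `aws s3 cp`, which needs credentials:

```shell
# Dry-run version of the workflow's S3 upload loop: no AWS credentials needed.
dir=/tmp/cli-output                                   # stand-in for cli/output
mkdir -p "$dir"
touch "$dir/olares-cli-v1.12.0_linux_amd64.tar.gz"    # hypothetical artifact name
cd "$dir"
for file in *.tar.gz; do
  # real workflow: aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
  echo "upload $file -> s3://terminus-os-install/$file"
done
```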

.github/workflows/release-daemon.yaml (new file)
View File

@@ -0,0 +1,58 @@
name: Release Daemon
on:
workflow_call:
inputs:
version:
type: string
required: true
ref:
type: string
workflow_dispatch:
jobs:
goreleaser:
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
continue-on-error: true
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.22.1
- name: install udev-devel
run: |
sudo apt install -y libudev-dev
- name: Install x86_64 cross-compiler
run: sudo apt-get update && sudo apt-get install -y build-essential
- name: Install ARM cross-compiler
run: sudo apt-get update && sudo apt-get install -y gcc-aarch64-linux-gnu
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v3.1.0
with:
distribution: goreleaser
workdir: './daemon'
version: v1.18.2
args: release --clean
- name: Upload to CDN
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
run: |
cd daemon/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
done

View File

@@ -9,6 +9,31 @@ on:
workflow_dispatch:
jobs:
daily-version:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.generate.outputs.version }}
steps:
- id: generate
run: |
v=1.12.0-$(date +"%Y%m%d")
echo "version=$v" >> "$GITHUB_OUTPUT"
release-cli:
needs: daily-version
uses: ./.github/workflows/release-cli.yaml
secrets: inherit
with:
version: ${{ needs.daily-version.outputs.version }}
release-daemon:
needs: daily-version
uses: ./.github/workflows/release-daemon.yaml
secrets: inherit
with:
version: ${{ needs.daily-version.outputs.version }}
push-images:
runs-on: ubuntu-22.04
@@ -39,6 +64,7 @@ jobs:
bash build/image-manifest.sh && bash build/upload-images.sh .manifest/images.mf linux/arm64
push-deps:
needs: [daily-version, release-daemon]
runs-on: ubuntu-latest
steps:
@@ -50,10 +76,12 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
push-deps-arm64:
needs: [daily-version, release-daemon]
runs-on: [self-hosted, linux, ARM64]
steps:
@@ -65,86 +93,78 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
upload-package:
needs: [push-images, push-images-arm64, push-deps, push-deps-arm64]
needs: [daily-version, push-images, push-images-arm64, push-deps, push-deps-arm64]
runs-on: ubuntu-latest
outputs:
md5sum: ${{ steps.upload.outputs.md5sum }}
steps:
- name: 'Daily tag version'
id: vars
run: |
v=1.12.0-$(date +"%Y%m%d")
echo "tag_version=$v" >> $GITHUB_OUTPUT
- name: 'Checkout source code'
uses: actions/checkout@v3
- name: Package installer
run: |
bash build/build.sh ${{ steps.vars.outputs.tag_version }}
bash build/build.sh ${{ needs.daily-version.outputs.version }}
- name: Upload to S3
id: upload
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz > install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ steps.vars.outputs.tag_version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz --acl=public-read
md5sum install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz > install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz --acl=public-read && \
echo "md5sum=$(awk '{print $1}' install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt)" >> "$GITHUB_OUTPUT"
release:
needs: [upload-package]
needs: [daily-version, upload-package]
runs-on: ubuntu-latest
steps:
- name: 'Checkout source code'
uses: actions/checkout@v3
- name: 'Daily tag version'
id: vars
run: |
v=1.12.0-$(date +"%Y%m%d")
echo "tag_version=$v" >> $GITHUB_OUTPUT
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${v}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
- name: Update checksum
uses: eball/write-tag-to-version-file@latest
with:
filename: 'build/installer/install.sh'
filename: 'build/base-package/install.sh'
placeholder: '#__MD5SUM__'
tag: ${{ steps.vars.outputs.version_md5sum }}
tag: ${{ needs.upload-package.outputs.md5sum }}
- name: Package installer
run: |
bash build/build.sh ${{ steps.vars.outputs.tag_version }}
bash build/build.sh ${{ needs.daily-version.outputs.version }}
- name: 'Archives'
run: |
cp .dist/install-wizard/install.sh build/installer
cp build/installer/install.sh build/installer/publicInstaller.sh
cp .dist/install-wizard/install.ps1 build/installer
cp .dist/install-wizard/install.sh build/base-package
cp build/base-package/install.sh build/base-package/publicInstaller.sh
cp .dist/install-wizard/install.ps1 build/base-package
- name: Release public files
uses: softprops/action-gh-release@v1
with:
name: v${{ steps.vars.outputs.tag_version }} Release
tag_name: ${{ steps.vars.outputs.tag_version }}
name: v${{ needs.daily-version.outputs.version }} Release
tag_name: ${{ needs.daily-version.outputs.version }}
files: |
install-wizard-v${{ steps.vars.outputs.tag_version }}.tar.gz
build/installer/publicInstaller.sh
build/installer/install.sh
build/installer/install.ps1
build/installer/joincluster.sh
build/installer/publicAddnode.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh
install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz
build/base-package/publicInstaller.sh
build/base-package/install.sh
build/base-package/install.ps1
build/base-package/joincluster.sh
build/base-package/publicAddnode.sh
build/base-package/version.hint
build/base-package/publicRestoreInstaller.sh
prerelease: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
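A notable change in the `upload-package` job above: instead of the `release` job re-downloading the checksum file from the CDN with `curl`, the workflow now computes it once with `md5sum`, extracts the hash with `awk`, and passes it downstream as the `md5sum` job output. The core pattern, sketched with a hypothetical package name:

```shell
# Compute a package checksum and expose only the hash, as the workflow now does.
pkg=/tmp/install-wizard-v1.12.0-demo.tar.gz   # hypothetical package
echo "demo payload" > "$pkg"
md5sum "$pkg" > "$pkg.md5sum.txt"             # "HASH  PATH" format
hash=$(awk '{print $1}' "$pkg.md5sum.txt")    # keep just the 32-char hash column
echo "md5sum=$hash"
```

Computing the hash locally also removes the race where the `release` job could fetch a stale `.md5sum.txt` before the CDN cache updated.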

View File

@@ -0,0 +1,71 @@
name: Publish mdns-agent to Dockerhub
on:
workflow_dispatch:
inputs:
version:
type: string
required: true
jobs:
update_dockerhub:
runs-on: ubuntu-latest
steps:
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASS }}
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
push: true
context: ./daemon
tags: beclab/olaresd:${{ inputs.version }}
file: ./daemon/docker/Dockerfile.agent
platforms: linux/amd64,linux/arm64
upload_release_package:
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 1
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
continue-on-error: true
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.22.1
- name: Install x86_64 cross-compiler
run: sudo apt-get update && sudo apt-get install -y build-essential
- name: Install ARM cross-compiler
run: sudo apt-get update && sudo apt-get install -y gcc-aarch64-linux-gnu
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v3.1.0
with:
distribution: goreleaser
version: v1.18.2
args: release --clean --skip-validate -f .goreleaser.agent.yml
workdir: './daemon'
- name: Upload to CDN
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
run: |
cd daemon/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
done

View File

@@ -9,6 +9,21 @@ on:
description: 'Release Tags'
jobs:
release-cli:
uses: ./.github/workflows/release-cli.yaml
secrets: inherit
with:
version: ${{ github.event.inputs.tags }}
ref: ${{ github.event.inputs.tags }}
release-daemon:
uses: ./.github/workflows/release-daemon.yaml
secrets: inherit
with:
version: ${{ github.event.inputs.tags }}
ref: ${{ github.event.inputs.tags }}
push:
runs-on: ubuntu-22.04
@@ -22,6 +37,7 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
VERSION: ${{ github.event.inputs.tags }}
run: |
bash build/image-manifest.sh && bash build/upload-images.sh .manifest/images.mf
@@ -38,12 +54,13 @@ jobs:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'us-east-1'
VERSION: ${{ github.event.inputs.tags }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/image-manifest.sh && bash build/upload-images.sh .manifest/images.mf linux/arm64
upload-package:
needs: [push, push-arm64]
needs: [push, push-arm64, release-daemon]
runs-on: ubuntu-latest
steps:
@@ -77,7 +94,7 @@ jobs:
ref: ${{ github.event.inputs.tags }}
- name: Update env
working-directory: ./build/installer
working-directory: ./build/base-package
run: |
echo 'DEBUG_VERSION="false"' > .env
@@ -89,7 +106,7 @@ jobs:
- name: Update checksum
uses: eball/write-tag-to-version-file@latest
with:
filename: 'build/installer/install.sh'
filename: 'build/base-package/install.sh'
placeholder: '#__MD5SUM__'
tag: ${{ steps.vars.outputs.version_md5sum }}
@@ -99,11 +116,11 @@ jobs:
- name: 'Archives'
run: |
cp .dist/install-wizard/install.sh build/installer
cp build/installer/install.sh build/installer/publicInstaller.sh
cp build/installer/install.sh build/installer/publicInstaller.latest
cp .dist/install-wizard/install.ps1 build/installer
cp build/installer/install.ps1 build/installer/publicInstaller.latest.ps1
cp .dist/install-wizard/install.sh build/base-package
cp build/base-package/install.sh build/base-package/publicInstaller.sh
cp build/base-package/install.sh build/base-package/publicInstaller.latest
cp .dist/install-wizard/install.ps1 build/base-package
cp build/base-package/install.ps1 build/base-package/publicInstaller.latest.ps1
- name: Release public files
uses: softprops/action-gh-release@v1
@@ -112,15 +129,15 @@ jobs:
tag_name: ${{ github.event.inputs.tags }}
files: |
install-wizard-v${{ github.event.inputs.tags }}.tar.gz
build/installer/publicInstaller.sh
build/installer/publicInstaller.latest
build/installer/install.sh
build/installer/publicInstaller.latest.ps1
build/installer/install.ps1
build/installer/publicAddnode.sh
build/installer/joincluster.sh
build/installer/version.hint
build/installer/publicRestoreInstaller.sh
build/base-package/publicInstaller.sh
build/base-package/publicInstaller.latest
build/base-package/install.sh
build/base-package/publicInstaller.latest.ps1
build/base-package/install.ps1
build/base-package/publicAddnode.sh
build/base-package/joincluster.sh
build/base-package/version.hint
build/base-package/publicRestoreInstaller.sh
prerelease: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore
View File

@@ -28,4 +28,6 @@ install-wizard-*.tar.gz
olares-cli-*.tar.gz
!ks-console-*.tgz
.vscode
.DS_Store
.DS_Store
cli/output
daemon/output

README.md
View File

@@ -1,6 +1,6 @@
<div align="center">
# Olares: An Open-Source Sovereign Cloud OS for Local AI<!-- omit in toc -->
# Olares: An Open-Source Personal Cloud to </br>Reclaim Your Data<!-- omit in toc -->
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/olares)](https://github.com/beclab/olares/commits/main)
@@ -18,33 +18,66 @@
</div>
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*Build your local AI assistants, sync data across places, self-host your workspace, stream your own media, and more—all in your sovereign cloud made possible by Olares.*
<p align="center">
<a href="https://olares.xyz">Website</a> ·
<a href="https://docs.olares.xyz">Documentation</a> ·
<a href="https://olares.xyz/larepass">Download LarePass</a> ·
<a href="https://olares.com">Website</a> ·
<a href="https://docs.olares.com">Documentation</a> ·
<a href="https://olares.com/larepass">Download LarePass</a> ·
<a href="https://github.com/beclab/apps">Olares Apps</a> ·
<a href="https://space.olares.xyz">Olares Space</a>
<a href="https://space.olares.com">Olares Space</a>
</p>
> [!IMPORTANT]
> We just finished our rebranding from Terminus to Olares recently. For more information, refer to our [rebranding blog](https://blog.olares.xyz/terminus-is-now-olares/).
>*The modern internet built on public clouds is increasingly threatening your personal data privacy. As reliance on services like ChatGPT, Midjourney, and Facebook grows, so does the risk to your digital autonomy. Your data lives on their servers, subject to their terms, tracking, and potential censorship.*
>
>*It's time for a change.*
Convert your hardware into an AI home server with Olares, an open-source sovereign cloud OS built for local AI.
- **Run leading AI models on your terms**: Effortlessly host powerful open AI models like LLaMA, Stable Diffusion, Whisper, and Flux.1 directly on your hardware, giving you full control over your AI environment.
- **Deploy with ease**: Discover and install a wide range of open-source AI apps from Olares Market in a few clicks. No more complicated configuration or setup.
- **Access anytime, anywhere**: Access your AI apps and models through a browser whenever and wherever you need them.
- **Integrated AI for a smarter AI experience**: Using a [Model Context Protocol](https://spec.modelcontextprotocol.io/specification/) (MCP)-like mechanism, Olares seamlessly connects AI models with AI apps and your private data sets. This creates highly personalized, context-aware AI interactions that adapt to your needs.
![Personal Cloud](https://file.bttcdn.com/github/olares/public-cloud-to-personal-cloud.jpg)
We believe you have a fundamental right to control your digital life. The most effective way to uphold this right is by hosting your data locally, on your own hardware.
Olares is an **open-source personal cloud operating system** designed to empower you to own and manage your digital assets locally. Instead of relying on public cloud services, you can deploy powerful open-source alternatives locally on Olares, such as Ollama for hosting LLMs, SD WebUI for image generation, and Mastodon for building a censorship-free social space. Imagine the power of the cloud, but with you in complete command.
> 🌟 *Star us to receive instant notifications about new releases and updates.*
## Why Olares?
## Architecture
Just as public clouds offer IaaS, PaaS, and SaaS layers, Olares provides open-source alternatives to each of these layers.
![Tech Stacks](https://file.bttcdn.com/github/olares/olares-architecture.jpg)
For detailed description of each component, refer to [Olares architecture](https://docs.olares.com/manual/system-architecture.html).
> 🔍 **How is Olares different from traditional NAS?**
>
> Olares focuses on building an all-in-one self-hosted personal cloud experience. Its core features and target users differ significantly from traditional Network Attached Storage (NAS) systems, which primarily focus on network storage. For more details, see [Compare Olares and NAS](https://docs.olares.com/manual/olares-vs-nas.html).
## Features
Olares offers a wide array of features designed to enhance security, ease of use, and development flexibility:
- **Enterprise-grade security**: Simplified network configuration using Tailscale, Headscale, Cloudflare Tunnel, and FRP.
- **Secure and permissionless application ecosystem**: Sandboxing ensures application isolation and security.
- **Unified file system and database**: Automated scaling, backups, and high availability.
- **Single sign-on**: Log in once to access all applications within Olares with a shared authentication service.
- **AI capabilities**: Comprehensive solution for GPU management, local AI model hosting, and private knowledge bases while maintaining data privacy.
- **Built-in applications**: Includes file manager, sync drive, vault, reader, app market, settings, and dashboard.
- **Seamless anywhere access**: Access your devices from anywhere using dedicated clients for mobile, desktop, and browsers.
- **Development tools**: Comprehensive development tools for effortless application development and porting.
Here are some screenshots from the UI for a sneak peek:
| **Desktop: Streamlined and familiar portal** | **Files: A secure home to your data** |
| :--------: | :-------: |
| ![Desktop](https://file.bttcdn.com/github/terminus/v2/desktop.jpg) | ![Files](https://file.bttcdn.com/github/terminus/v2/files.jpg) |
| **Vault: 1Password alternative** | **Market: App ecosystem in your control** |
| ![vault](https://file.bttcdn.com/github/terminus/v2/vault.jpg) | ![market](https://file.bttcdn.com/github/terminus/v2/market.jpg) |
| **Wise: Your digital secret garden** | **Settings: Manage Olares efficiently** |
| ![wise](https://file.bttcdn.com/github/terminus/v2/wise.jpg) | ![settings](https://file.bttcdn.com/github/terminus/v2/settings.jpg) |
| **Dashboard: Constant system monitoring** | **Profile: Your unique homepage** |
| ![dashboard](https://file.bttcdn.com/github/terminus/v2/dashboard.jpg) | ![profile](https://file.bttcdn.com/github/terminus/v2/profile.jpg) |
| **Studio: Develop, debug, and deploy** | **Control Hub: Manage Kubernetes clusters easily** |
| ![Studio](https://file.bttcdn.com/github/terminus/v2/devbox.jpg) | ![Controlhub](https://file.bttcdn.com/github/terminus/v2/controlhub.jpg) |
## Key use cases
Here is why and where you can count on Olares for a private, powerful, and secure sovereign cloud experience:
@@ -68,121 +101,39 @@ Here is why and where you can count on Olares for private, powerful, and secure
Olares has been tested and verified on the following Linux platforms:
- Ubuntu 20.04 LTS or later
- Ubuntu 24.04 LTS or later
- Debian 11 or later
> **Other installation options**
> Olares can also be installed on other platforms like macOS, Windows, PVE, and Raspberry Pi, or installed via Docker Compose on Linux. However, these are only for **testing and development purposes**. For detailed instructions, visit [Additional installation options](https://docs.olares.xyz/developer/install/additional-installations.html).
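As a rough pre-flight check, the platform requirements above can be tested against the `ID` and `VERSION_ID` fields of `/etc/os-release` before installing. This is a minimal sketch, not part of the installer: the major-version thresholds are assumptions read off the list above (the diff shows both 20.04 and 24.04 for Ubuntu; the looser bound is used here), and the docs linked above remain authoritative.

```shell
# supported_os ID VERSION_ID -> exit 0 if the platform matches the list above
supported_os() {
  case "$1" in
    ubuntu) [ "${2%%.*}" -ge 20 ] ;;   # Ubuntu 20.04 LTS or later (assumed bound)
    debian) [ "${2%%.*}" -ge 11 ] ;;   # Debian 11 or later
    *)      return 1 ;;                # anything else is unverified
  esac
}

# On a real host: . /etc/os-release && supported_os "$ID" "$VERSION_ID"
supported_os ubuntu 24.04 && echo "supported"
supported_os debian 10    || echo "unsupported"
```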
### Set up Olares
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.xyz/manual/get-started/) for step-by-step instructions.
## Architecture
Olares' architecture is based on two core principles:
- Adopts an Android-like approach to control software permissions and interactivity, ensuring smooth and secure system operations.
- Leverages cloud-native technologies to manage hardware and middleware services efficiently.
![Olares Architecture](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
For detailed description of each component, refer to [Olares architecture](https://docs.olares.xyz/manual/system-architecture.html).
## Features
Olares offers a wide array of features designed to enhance security, ease of use, and development flexibility:
- **Enterprise-grade security**: Simplified network configuration using Tailscale, Headscale, Cloudflare Tunnel, and FRP.
- **Secure and permissionless application ecosystem**: Sandboxing ensures application isolation and security.
- **Unified file system and database**: Automated scaling, backups, and high availability.
- **Single sign-on**: Log in once to access all applications within Olares with a shared authentication service.
- **AI capabilities**: Comprehensive solution for GPU management, local AI model hosting, and private knowledge bases while maintaining data privacy.
- **Built-in applications**: Includes file manager, sync drive, vault, reader, app market, settings, and dashboard.
- **Seamless anywhere access**: Access your devices from anywhere using dedicated clients for mobile, desktop, and browsers.
- **Development tools**: Comprehensive development tools for effortless application development and porting.
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.com/manual/get-started/) for step-by-step instructions.
## Project navigation
Olares consists of numerous code repositories publicly available on GitHub. The current repository is responsible for the final compilation, packaging, installation, and upgrade of the operating system, while specific changes mostly take place in their corresponding repositories.
> [!NOTE]
> We are currently consolidating Olares subproject code into this repository. This process may take a few months. Once finished, you will get a comprehensive view of the entire Olares system here.
The following table lists the project directories under Olares and their corresponding repositories. Find the one that interests you:
<details>
<summary><b>Framework components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [frameworks/app-service](https://github.com/beclab/olares/tree/main/frameworks/app-service) | <https://github.com/beclab/app-service> | A system framework component that provides lifecycle management and various security controls for all apps in the system. |
| [frameworks/backup-server](https://github.com/beclab/olares/tree/main/frameworks/backup-server) | <https://github.com/beclab/backup-server> | A system framework component that provides scheduled full or incremental cluster backup services. |
| [frameworks/bfl](https://github.com/beclab/olares/tree/main/frameworks/bfl) | <https://github.com/beclab/bfl> | Backend For Launcher (BFL), a system framework component serving as the user access point and aggregating and proxying interfaces of various backend services. |
| [frameworks/GPU](https://github.com/beclab/olares/tree/main/frameworks/GPU) | <https://github.com/grgalex/nvshare> | GPU sharing mechanism that allows multiple processes (or containers running on Kubernetes) to securely run on the same physical GPU concurrently, each having the whole GPU memory available. |
| [frameworks/l4-bfl-proxy](https://github.com/beclab/olares/tree/main/frameworks/l4-bfl-proxy) | <https://github.com/beclab/l4-bfl-proxy> | Layer 4 network proxy for BFL. By prereading SNI, it provides a dynamic route to pass through into the user's Ingress. |
| [frameworks/osnode-init](https://github.com/beclab/olares/tree/main/frameworks/osnode-init) | <https://github.com/beclab/osnode-init> | A system framework component that initializes node data when a new node joins the cluster. |
| [frameworks/system-server](https://github.com/beclab/olares/tree/main/frameworks/system-server) | <https://github.com/beclab/system-server> | As a part of system runtime frameworks, it provides a mechanism for security calls between apps. |
| [frameworks/tapr](https://github.com/beclab/olares/tree/main/frameworks/tapr) | <https://github.com/beclab/tapr> | Olares Application Runtime components. |
</details>
This section lists the main directories in the Olares repository:
<details>
<summary><b>System-Level Applications and Services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [apps/analytic](https://github.com/beclab/olares/tree/main/apps/analytic) | <https://github.com/beclab/analytic> | Developed based on [Umami](https://github.com/umami-software/umami), Analytic is a simple, fast, privacy-focused alternative to Google Analytics. |
| [apps/market](https://github.com/beclab/olares/tree/main/apps/market) | <https://github.com/beclab/market> | This repository deploys the front-end part of the application market in Olares. |
| [apps/market-server](https://github.com/beclab/olares/tree/main/apps/market-server) | <https://github.com/beclab/market> | This repository deploys the back-end part of the application market in Olares. |
| [apps/argo](https://github.com/beclab/olares/tree/main/apps/argo) | <https://github.com/argoproj/argo-workflows> | A workflow engine for orchestrating container execution of local recommendation algorithms. |
| [apps/desktop](https://github.com/beclab/olares/tree/main/apps/desktop) | <https://github.com/beclab/desktop> | The built-in desktop application of the system. |
| [apps/devbox](https://github.com/beclab/olares/tree/main/apps/devbox) | <https://github.com/beclab/devbox> | An IDE for developers to port and develop Olares applications. |
| [apps/vault](https://github.com/beclab/olares/tree/main/apps/vault) | <https://github.com/beclab/termipass> | A free alternative to 1Password and Bitwarden for teams and enterprises of any size, developed based on [Padloc](https://github.com/padloc/padloc). It serves as the client that helps you manage DID, Olares ID, and Olares devices. |
| [apps/files](https://github.com/beclab/olares/tree/main/apps/files) | <https://github.com/beclab/files> | A built-in file manager modified from [Filebrowser](https://github.com/filebrowser/filebrowser), providing management of files on Drive, Sync, and various Olares physical nodes. |
| [apps/notifications](https://github.com/beclab/olares/tree/main/apps/notifications) | <https://github.com/beclab/notifications> | The notification system of Olares. |
| [apps/profile](https://github.com/beclab/olares/tree/main/apps/profile) | <https://github.com/beclab/profile> | A Linktree alternative in Olares. |
| [apps/rsshub](https://github.com/beclab/olares/tree/main/apps/rsshub) | <https://github.com/beclab/rsshub> | An RSS subscription manager based on [RSSHub](https://github.com/DIYgod/RSSHub). |
| [apps/settings](https://github.com/beclab/olares/tree/main/apps/settings) | <https://github.com/beclab/settings> | Built-in system settings. |
| [apps/system-apps](https://github.com/beclab/olares/tree/main/apps/system-apps) | <https://github.com/beclab/system-apps> | Built based on the _kubesphere/console_ project, system-service provides a self-hosted cloud platform that helps users understand and control the system's runtime status and resource usage through a visual Dashboard and feature-rich ControlHub. |
| [apps/wizard](https://github.com/beclab/olares/tree/main/apps/wizard) | <https://github.com/beclab/wizard> | A wizard application to walk users through the system activation process. |
</details>
<details>
<summary><b>Third-party Components and Services</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [third-party/authelia](https://github.com/beclab/olares/tree/main/third-party/authelia) | <https://github.com/beclab/authelia> | An open-source authentication and authorization server providing two-factor authentication and single sign-on (SSO) for your applications via a web portal. |
| [third-party/headscale](https://github.com/beclab/olares/tree/main/third-party/headscale) | <https://github.com/beclab/headscale> | An open source, self-hosted implementation of the Tailscale control server in Olares to manage Tailscale in LarePass across different devices. |
| [third-party/infisical](https://github.com/beclab/olares/tree/main/third-party/infisical) | <https://github.com/beclab/infisical> | An open-source secret management platform that syncs secrets across your teams/infrastructure and prevents secret leaks. |
| [third-party/juicefs](https://github.com/beclab/olares/tree/main/third-party/juicefs) | <https://github.com/beclab/juicefs-ext> | A distributed POSIX file system built on top of Redis and S3, allowing apps on different nodes to access the same data via POSIX interface. |
| [third-party/ks-console](https://github.com/beclab/olares/tree/main/third-party/ks-console) | <https://github.com/kubesphere/console> | Kubesphere console that allows for cluster management via a Web GUI. |
| [third-party/ks-installer](https://github.com/beclab/olares/tree/main/third-party/ks-installer) | <https://github.com/beclab/ks-installer-ext> | Kubesphere installer component that automatically creates Kubesphere clusters based on cluster resource definitions. |
| [third-party/kube-state-metrics](https://github.com/beclab/olares/tree/main/third-party/kube-state-metrics) | <https://github.com/beclab/kube-state-metrics> | kube-state-metrics (KSM) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. |
| [third-party/notification-manager](https://github.com/beclab/olares/tree/main/third-party/notification-manager) | <https://github.com/beclab/notification-manager-ext> | Kubesphere's notification management component for unified management of multiple notification channels and custom aggregation of notification content. |
| [third-party/predixy](https://github.com/beclab/olares/tree/main/third-party/predixy) | <https://github.com/beclab/predixy> | Redis cluster proxy service that automatically identifies available nodes and adds namespace isolation. |
| [third-party/redis-cluster-operator](https://github.com/beclab/olares/tree/main/third-party/redis-cluster-operator) | <https://github.com/beclab/redis-cluster-operator> | A cloud-native tool for creating and managing Redis clusters based on Kubernetes. |
| [third-party/seafile-server](https://github.com/beclab/olares/tree/main/third-party/seafile-server) | <https://github.com/beclab/seafile-server> | The backend service of Seafile (Sync Drive) for handling data storage. |
| [third-party/seahub](https://github.com/beclab/olares/tree/main/third-party/seahub) | <https://github.com/beclab/seahub> | The front-end and middleware service of Seafile (Sync Drive) for handling file sharing, data synchronization, etc. |
| [third-party/tailscale](https://github.com/beclab/olares/tree/main/third-party/tailscale) | <https://github.com/tailscale/tailscale> | Tailscale is integrated into LarePass on all platforms. |
</details>
<details>
<summary><b>Additional libraries and components</b></summary>
| Directory | Repository | Description |
| --- | --- | --- |
| [build/installer](https://github.com/beclab/olares/tree/main/build/installer) | | The template for generating the installer build. |
| [build/manifest](https://github.com/beclab/olares/tree/main/build/manifest) | | Installation build image list template. |
| [libs/fs-lib](https://github.com/beclab/olares/tree/main/libs) | <https://github.com/beclab/fs-lib> | The SDK library for the iNotify-compatible interface implemented based on JuiceFS. |
| [scripts](https://github.com/beclab/olares/tree/main/scripts) | | Assisting scripts for generating the installer build. |
</details>
* **`apps`**: Contains the code for system applications, primarily for `larepass`.
* **`cli`**: Contains the code for `olares-cli`, the command-line interface tool for Olares.
* **`daemon`**: Contains the code for `olaresd`, the system daemon process.
* **`docs`**: Contains documentation for the project.
* **`framework`**: Contains the Olares system services.
* **`infrastructure`**: Contains code related to infrastructure components such as computing, storage, networking, and GPUs.
* **`platform`**: Contains code for cloud-native components like databases and message queues.
* **`vendor`**: Contains code from third-party hardware vendors.
## Contributing to Olares
We welcome contributions in any form:
- If you want to develop your own applications on Olares, refer to:<br>
https://docs.olares.xyz/developer/develop/
https://docs.olares.com/developer/develop/
- If you want to help improve Olares, refer to:<br>
https://docs.olares.xyz/developer/contribute/olares.html
https://docs.olares.com/developer/contribute/olares.html
## Community & contact

View File

@@ -1,6 +1,6 @@
<div align="center">
# Olares - 为本地 AI 打造的开源私有云操作系统<!-- omit in toc -->
# Olares:助您重获数据主权的开源个人云
[![Mission](https://img.shields.io/badge/Mission-Let%20people%20own%20their%20data%20again-purple)](#)<br/>
[![Last Commit](https://img.shields.io/github/last-commit/beclab/terminus)](https://github.com/beclab/olares/commits/main)
@@ -18,30 +18,67 @@
</div>
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*Olares 让你体验更多可能:构建个人 AI 助理、随时随地同步数据、自托管团队协作空间、打造私人影视厅——无缝整合你的数字生活。*
<p align="center">
<a href="https://olares.xyz">网站</a> ·
<a href="https://docs.olares.xyz">文档</a> ·
<a href="https://docs.olares.xyz/larepass">下载 LarePass</a> ·
<a href="https://olares.com">网站</a> ·
<a href="https://docs.olares.com">文档</a> ·
<a href="https://olares.com/larepass">下载 LarePass</a> ·
<a href="https://github.com/beclab/apps">Olares 应用</a> ·
<a href="https://space.olares.xyz">Olares Space</a>
<a href="https://space.olares.com">Olares Space</a>
</p>
## 介绍
> *基于公有云构建的现代互联网日益威胁着您的个人数据隐私。随着您对 ChatGPT、Midjourney 和脸书等服务的依赖加深,您对数字自主权的掌控也在减弱。您的数据存储在他人服务器上,受其条款约束,被追踪并审查。*
>
> *是时候做出改变了。*
Olares 是为本地端侧 AI 打造的开源私有云操作系统,可轻松将您的硬件转变为 AI 家庭服务器。
- 运行领先 AI 模型:在您的硬件上轻松部署并掌控 LLaMA、Stable Diffusion、Whisper 和 Flux.1 等顶尖开源 AI 模型。
- 轻松部署 AI 应用:通过 Olares 应用市场,轻松部署丰富多样的开源 AI 应用,无需复杂繁琐的配置。
- 随心访问:通过浏览器随时随地访问你的 AI 应用。
- 更智能的专属 AI 体验:通过类似[模型上下文协议](https://spec.modelcontextprotocol.io/specification/)(Model Context Protocol, MCP)的机制,Olares 可让 AI 模型无缝连接 AI 应用与您的私人数据集,提供基于任务场景的个性化 AI 体验。
![个人云](https://file.bttcdn.com/github/olares/public-cloud-to-personal-cloud.jpg)
我们坚信,**您拥有掌控自己数字生活的基本权利**。维护这一权利最有效的方式,就是将您的数据托管在本地,在您自己的硬件上。
Olares 是一款开源个人云操作系统,旨在让您能够轻松在本地拥有并管理自己的数字资产。您无需再依赖公有云服务,而可以在 Olares 上本地部署强大的开源平替服务或应用,例如可以使用 Ollama 托管大语言模型,使用 SD WebUI 用于图像生成,以及使用 Mastodon 构建不受审查的社交空间。Olares 让你坐拥云计算的强大威力,又能完全将其置于自己掌控之下。
> 为 Olares 点亮 🌟 以及时获取新版本和更新的通知。
## 为什么选择 Olares?
## 系统架构
公有云具有基础设施即服务(IaaS)、平台即服务(PaaS)和软件即服务(SaaS)等层级。Olares 为这些层级提供了开源替代方案。
![技术栈](https://file.bttcdn.com/github/olares/olares-architecture.jpg)
详细描述请参考 [Olares 架构](https://docs.olares.cn/zh/manual/system-architecture.html)文档。
>🔍**Olares 和 NAS 有什么不同?**
>
> Olares 致力于打造一站式的自托管个人云体验。其核心功能与用户定位,均与专注于网络存储的传统 NAS 有着显著的不同,详情请参考 [Olares 与 NAS 对比](https://docs.olares.com/zh/manual/olares-vs-nas.html)。
## 功能特性
Olares 提供了一系列功能,旨在提升安全性、使用便捷性以及开发的灵活性:
- **企业级安全**:使用 Tailscale、Headscale、Cloudflare Tunnel 和 FRP 简化网络配置,确保安全连接。
- **安全且无需许可的应用生态系统**:应用通过沙箱化技术实现隔离,保障应用运行的安全性。
- **统一文件系统和数据库**:提供自动扩展、数据备份和高可用性功能,确保数据的持久安全。
- **单点登录**:用户仅需一次登录,即可访问 Olares 中所有应用的共享认证服务。
- **AI 功能**:包括全面的 GPU 管理、本地 AI 模型托管及私有知识库,同时严格保护数据隐私。
- **内置应用程序**:涵盖文件管理器、同步驱动器、密钥管理器、阅读器、应用市场、设置和面板等,提供全面的应用支持。
- **无缝访问**:通过移动端、桌面端和网页浏览器客户端,从全球任何地方访问设备。
- **开发工具**:提供全面的工具支持,便于开发和移植应用,加速开发进程。
以下是用户界面的一些截图预览:
| **桌面:熟悉高效的访问入口** | **文件管理器:安全存储数据** |
| :--------: | :-------: |
| ![桌面](https://file.bttcdn.com/github/terminus/v2/desktop.jpg) | ![文件](https://file.bttcdn.com/github/terminus/v2/files.jpg) |
| **Vault:密码无忧管理** | **市场:可控的应用生态系统** |
| ![vault](https://file.bttcdn.com/github/terminus/v2/vault.jpg) | ![市场](https://file.bttcdn.com/github/terminus/v2/market.jpg) |
| **Wise:数字后花园** | **设置:高效管理 Olares** |
| ![wise](https://file.bttcdn.com/github/terminus/v2/wise.jpg) | ![设置](https://file.bttcdn.com/github/terminus/v2/settings.jpg) |
| **仪表盘:持续监控 Olares** | **Profile:独特的个人主页** |
| ![面板](https://file.bttcdn.com/github/terminus/v2/dashboard.jpg) | ![profile](https://file.bttcdn.com/github/terminus/v2/profile.jpg) |
| **Studio:一站式开发、调试和部署** | **控制面板:轻松管理 Kubernetes 集群** |
| ![Devbox](https://file.bttcdn.com/github/terminus/v2/devbox.jpg) | ![控制中心](https://file.bttcdn.com/github/terminus/v2/controlhub.jpg) |
## 使用场景
在以下场景中,Olares 为您带来私密、强大且安全的私有云体验:
@@ -65,122 +102,39 @@ Olares 是为本地端侧 AI 打造的开源私有云操作系统,可轻松将
Olares 已在以下 Linux 平台完成测试与验证:
- Ubuntu 20.04 LTS 及以上版本
- Ubuntu 24.04 LTS 及以上版本
- Debian 11 及以上版本
> **其他安装方式**
> Olares 也支持在 macOS、Windows、PVE、树莓派等平台上运行或通过 Docker Compose 在 Linux 上部署。但请注意,这些方式**仅适用于开发和测试环境**。详细安装指南请参阅[其他安装方式](https://docs.joinolares.cn/zh/developer/install/additional-installations.html)。
### 安装 Olares
参考[快速上手指南](https://docs.joinolares.cn/zh/manual/get-started/)安装并激活 Olares。
## 系统架构
Olares 的架构设计遵循两个核心原则:
- 参考 Android 模式,控制软件权限和交互性,确保系统的流畅性和安全性。
- 借鉴云原生技术,高效管理硬件和中间件服务。
![架构](https://file.bttcdn.com/github/terminus/v2/olares-arch-3.png)
详细描述请参考 [Olares 架构](https://docs.joinolares.cn/zh/manual/system-architecture.html)文档。
## 功能特性
Olares 提供了一系列功能,旨在提升安全性、使用便捷性以及开发的灵活性:
- **企业级安全**:使用 Tailscale、Headscale、Cloudflare Tunnel 和 FRP 简化网络配置,确保安全连接。
- **安全且无需许可的应用生态系统**:应用通过沙箱化技术实现隔离,保障应用运行的安全性。
- **统一文件系统和数据库**:提供自动扩展、数据备份和高可用性功能,确保数据的持久安全。
- **单点登录**:用户仅需一次登录,即可访问 Olares 中所有应用的共享认证服务。
- **AI 功能**:包括全面的 GPU 管理、本地 AI 模型托管及私有知识库,同时严格保护数据隐私。
- **内置应用程序**:涵盖文件管理器、同步驱动器、密钥管理器、阅读器、应用市场、设置和面板等,提供全面的应用支持。
- **无缝访问**:通过移动端、桌面端和网页浏览器客户端,从全球任何地方访问设备。
- **开发工具**:提供全面的工具支持,便于开发和移植应用,加速开发进程。
参考[快速上手指南](https://docs.olares.cn/zh/manual/get-started/)安装并激活 Olares。
## 项目目录
Olares 包含多个在 GitHub 上公开可用的代码仓库。当前仓库负责操作系统的最终编译、打包、安装和升级,而特定的更改主要在各自对应的仓库中进行。
> [!NOTE]
> 我们正将 Olares 子项目的代码移动到当前仓库。此过程可能会持续数月。届时您就可以通过本仓库了解 Olares 系统的全貌。
以下表格列出了 Olares 下的项目目录及其对应的仓库。
Olares 代码库中的主要目录如下:
<details>
<summary><b>框架组件</b></summary>
| 路径 | 仓库 | 说明 |
| --- | --- | --- |
| [frameworks/app-service](https://github.com/beclab/olares/tree/main/frameworks/app-service) | <https://github.com/beclab/app-service> | 系统框架组件,负责提供全系统应用的生命周期管理及多种安全控制。 |
| [frameworks/backup-server](https://github.com/beclab/olares/tree/main/frameworks/backup-server) | <https://github.com/beclab/backup-server> | 系统框架组件,提供定时的全量或增量集群备份服务。 |
| [frameworks/bfl](https://github.com/beclab/olares/tree/main/frameworks/bfl) | <https://github.com/beclab/bfl> | 启动器后端(Backend For Launcher, BFL),作为用户访问点的系统框架组件,整合并代理各种后端服务的接口。 |
| [frameworks/GPU](https://github.com/beclab/olares/tree/main/frameworks/GPU) | <https://github.com/grgalex/nvshare> | GPU共享机制允许多个进程或运行在 Kubernetes 上的容器)安全地同时在同一物理 GPU 上运行,每个进程都可访问全部 GPU 内存。 |
| [frameworks/l4-bfl-proxy](https://github.com/beclab/olares/tree/main/frameworks/l4-bfl-proxy) | <https://github.com/beclab/l4-bfl-proxy> | 针对 BFL 的第 4 层网络代理。通过预读服务器名称指示(SNI),提供一条动态路由至用户的 Ingress。 |
| [frameworks/osnode-init](https://github.com/beclab/olares/tree/main/frameworks/osnode-init) | <https://github.com/beclab/osnode-init> | 系统框架组件,用于初始化新节点加入集群时的节点数据。 |
| [frameworks/system-server](https://github.com/beclab/olares/tree/main/frameworks/system-server) | <https://github.com/beclab/system-server> | 作为系统运行时框架的一部分,提供应用间安全通信的机制。 |
| [frameworks/tapr](https://github.com/beclab/olares/tree/main/frameworks/tapr) | <https://github.com/beclab/tapr> | Olares 应用运行时组件。 |
</details>
<details>
<summary><b>系统级应用程序和服务</b></summary>
| 路径 | 仓库 | 说明 |
| --- | --- | --- |
| [apps/analytic](https://github.com/beclab/olares/tree/main/apps/analytic) | <https://github.com/beclab/analytic> | 基于 [Umami](https://github.com/umami-software/umami) 开发的 Analytic,是一个简单、快速、注重隐私的 Google Analytics 替代品。 |
| [apps/market](https://github.com/beclab/olares/tree/main/apps/market) | <https://github.com/beclab/market> | 此代码库部署了 Olares 应用市场的前端部分。 |
| [apps/market-server](https://github.com/beclab/olares/tree/main/apps/market-server) | <https://github.com/beclab/market> | 此代码库部署了 Olares 应用市场的后端部分。 |
| [apps/argo](https://github.com/beclab/olares/tree/main/apps/argo) | <https://github.com/argoproj/argo-workflows> | 用于协调本地推荐算法容器执行的工作流引擎。 |
| [apps/desktop](https://github.com/beclab/olares/tree/main/apps/desktop) | <https://github.com/beclab/desktop> | 系统内置的桌面应用程序。 |
| [apps/devbox](https://github.com/beclab/olares/tree/main/apps/devbox) | <https://github.com/beclab/devbox> | 为开发者提供的 IDE,用于移植和开发 Olares 应用。 |
| [apps/vault](https://github.com/beclab/olares/tree/main/apps/vault) | <https://github.com/beclab/termipass> | 基于 [Padloc](https://github.com/padloc/padloc) 开发的团队和企业的免费 1Password 和 Bitwarden 替代品,作为客户端帮助您管理 DID、Olares ID和 Olares 设备。 |
| [apps/files](https://github.com/beclab/olares/tree/main/apps/files) | <https://github.com/beclab/files> | 基于 [Filebrowser](https://github.com/filebrowser/filebrowser) 修改的内置文件管理器,管理 Drive、Sync 和各种 Olares 物理节点上的文件。|
| [apps/notifications](https://github.com/beclab/olares/tree/main/apps/notifications) | <https://github.com/beclab/notifications> | Olares 的通知系统。 |
| [apps/profile](https://github.com/beclab/olares/tree/main/apps/profile) | <https://github.com/beclab/profile> | Olares 中的 Linktree 替代品。|
| [apps/rsshub](https://github.com/beclab/olares/tree/main/apps/rsshub) | <https://github.com/beclab/rsshub> | 基于 [RssHub](https://github.com/DIYgod/RSSHub) 的 RSS 订阅管理器。 |
| [apps/settings](https://github.com/beclab/olares/tree/main/apps/settings) | <https://github.com/beclab/settings> | 内置系统设置。 |
| [apps/system-apps](https://github.com/beclab/olares/tree/main/apps/system-apps) | <https://github.com/beclab/system-apps> | 基于 *kubesphere/console* 项目构建的 system-service 提供一个自托管的云平台,通过视觉仪表板和功能丰富的 ControlHub 帮助用户了解和控制系统的运行状态和资源使用。 |
| [apps/wizard](https://github.com/beclab/olares/tree/main/apps/wizard) | <https://github.com/beclab/wizard> | 向用户介绍系统激活过程的向导应用程序。 |
</details>
<details>
<summary><b>第三方组件和服务</b></summary>
| 路径 | 仓库 | 说明 |
| --- | --- | --- |
| [third-party/authelia](https://github.com/beclab/olares/tree/main/third-party/authelia) | <https://github.com/beclab/authelia> | 一个开源的认证和授权服务器通过网络门户为应用程序提供双因素认证和单点登录SSO。 |
| [third-party/headscale](https://github.com/beclab/olares/tree/main/third-party/headscale) | <https://github.com/beclab/headscale> | 在 Olares 中的 Tailscale 控制服务器的开源自托管实现,用于管理 LarePass 中不同设备上的 Tailscale。|
| [third-party/infisical](https://github.com/beclab/olares/tree/main/third-party/infisical) | <https://github.com/beclab/infisical> | 一个开源的密钥管理平台,可以在团队/基础设施之间同步密钥并防止泄露。 |
| [third-party/juicefs](https://github.com/beclab/olares/tree/main/third-party/juicefs) | <https://github.com/beclab/juicefs-ext> | 基于 Redis 和 S3 之上构建的分布式 POSIX 文件系统,允许不同节点上的应用通过 POSIX 接口访问同一数据。 |
| [third-party/ks-console](https://github.com/beclab/olares/tree/main/third-party/ks-console) | <https://github.com/kubesphere/console> | Kubesphere 控制台,允许通过 Web GUI 进行集群管理。 |
| [third-party/ks-installer](https://github.com/beclab/olares/tree/main/third-party/ks-installer) | <https://github.com/beclab/ks-installer-ext> | Kubesphere 安装组件,根据集群资源定义自动创建 Kubesphere 集群。 |
| [third-party/kube-state-metrics](https://github.com/beclab/olares/tree/main/third-party/kube-state-metrics) | <https://github.com/beclab/kube-state-metrics> | kube-state-metrics(KSM)是一个简单的服务,监听 Kubernetes API 服务器并生成关于对象状态的指标。 |
| [third-party/notification-manager](https://github.com/beclab/olares/tree/main/third-party/notification-manager) | <https://github.com/beclab/notification-manager-ext> | Kubesphere 的通知管理组件,用于统一管理多个通知渠道和自定义聚合通知内容。 |
| [third-party/predixy](https://github.com/beclab/olares/tree/main/third-party/predixy) | <https://github.com/beclab/predixy> | Redis 集群代理服务,自动识别可用节点并添加命名空间隔离。 |
| [third-party/redis-cluster-operator](https://github.com/beclab/olares/tree/main/third-party/redis-cluster-operator) | <https://github.com/beclab/redis-cluster-operator> | 一个基于 Kubernetes 的云原生工具,用于创建和管理 Redis 集群。 |
| [third-party/seafile-server](https://github.com/beclab/olares/tree/main/third-party/seafile-server) | <https://github.com/beclab/seafile-server> | Seafile(同步驱动器)的后端服务,用于处理数据存储。 |
| [third-party/seahub](https://github.com/beclab/olares/tree/main/third-party/seahub) | <https://github.com/beclab/seahub> | Seafile(同步驱动器)的前端和中间件服务,用于处理文件共享、数据同步等。 |
| [third-party/tailscale](https://github.com/beclab/olares/tree/main/third-party/tailscale) | <https://github.com/tailscale/tailscale> | Tailscale 已在所有平台的 LarePass 中集成。 |
</details>
<details>
<summary><b>其他库和组件</b></summary>
| 路径 | 仓库 | 说明 |
| --- | --- | --- |
| [build/installer](https://github.com/beclab/olares/tree/main/build/installer) | | 用于生成安装程序构建的模板。 |
| [build/manifest](https://github.com/beclab/olares/tree/main/build/manifest) | | 安装构建镜像列表模板。 |
| [libs/fs-lib](https://github.com/beclab/olares/tree/main/libs) | <https://github.com/beclab/fs-lib> | 基于 JuiceFS 实现的 iNotify 兼容接口的SDK库。 |
| [scripts](https://github.com/beclab/olares/tree/main/scripts) | | 生成安装程序构建的辅助脚本。 |
</details>
* **`apps`**: 用于存放系统应用,主要是 `larepass` 的代码。
* **`cli`**: 用于存放 `olares-cli`(Olares 的命令行界面工具)的代码。
* **`daemon`**: 用于存放 `olaresd`(系统守护进程)的代码。
* **`docs`**: 用于存放 Olares 项目的文档。
* **`framework`**: 用来存放 Olares 系统服务代码。
* **`infrastructure`**: 用于存放计算、存储、网络、GPU 等基础设施的代码。
* **`platform`**: 用于存放数据库、消息队列等云原生组件的代码。
* **`vendor`**: 用于存放来自第三方硬件供应商的代码。
## 社区贡献
我们欢迎任何形式的贡献!
- 如果您想在 Olares 上开发自己的应用,请参考:<br>
https://docs.olares.xyz/developer/develop/
https://docs.olares.com/developer/develop/
- 如果您想帮助改进 Olares请参考<br>
https://docs.olares.xyz/developer/contribute/olares.html
https://docs.olares.com/developer/contribute/olares.html
## 社区支持

View File

@@ -18,30 +18,65 @@
</div>
https://github.com/user-attachments/assets/3089a524-c135-4f96-ad2b-c66bf4ee7471
*Olaresを使って、ローカルAIアシスタントを構築し、データを場所を問わず同期し、ワークスペースをセルフホストし、独自のメディアをストリーミングし、その他多くのことを実現できます。*
<p align="center">
<a href="https://olares.com">ウェブサイト</a> ·
<a href="https://docs.olares.com">ドキュメント</a> ·
<a href="https://olares.com/larepass">LarePassをダウンロード</a> ·
<a href="https://github.com/beclab/apps">Olaresアプリ</a> ·
<a href="https://space.olares.com">Olares Space</a>
</p>
> [!IMPORTANT]
> 最近、TerminusからOlaresへのリブランディングを完了しました。詳細については、[リブランディングブログ](https://blog.olares.xyz/terminus-is-now-olares/)をご覧ください。
> *パブリッククラウドを基盤とする現代のインターネットは、あなたの個人データのプライバシーをますます脅かしています。ChatGPT、Midjourney、Facebookといったサービスへの依存が深まるにつれ、デジタル主権に対するあなたのコントロールも弱まっています。あなたのデータは他者のサーバーに保存され、その利用規約に縛られ、追跡され、検閲されているのです。*
>
>*今こそ、変革の時です。*
Olaresを使用して、ハードウェアをAIホームサーバーに変換します。Olaresは、ローカルAIのためのオープンソース主権クラウドOSです。
![自身のデジタル](https://file.bttcdn.com/github/olares/public-cloud-to-personal-cloud.jpg)
- **最先端のAIモデルを自分の条件で実行**: LLaMA、Stable Diffusion、Whisper、Flux.1などの強力なオープンAIモデルをハードウェア上で簡単にホストし、AI環境を完全に制御します。
- **簡単にデプロイ**: Olares Marketから幅広いオープンソースAIアプリを数クリックで発見してインストールします。複雑な設定やセットアップは不要です。
- **いつでもどこでもアクセス**: ブラウザを通じて、必要なときにAIアプリやモデルにアクセスします。
- **統合されたAIでスマートなAI体験**: [Model Context Protocol](https://spec.modelcontextprotocol.io/specification/)MCPに似たメカニズムを使用して、OlaresはAIモデルとAIアプリ、およびプライベートデータセットをシームレスに接続します。これにより、ニーズに応じて適応する高度にパーソナライズされたコンテキスト対応のAIインタラクションが実現します。
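ローカルAIホスティングの利用イメージを掴むための最小スケッチです。OllamaのようなOpenAI互換のchat completionsエンドポイントをローカルでホストしている前提で、URL・モデル名・関数名はいずれも説明用の仮のものです:

```python
import json
from urllib import request


def build_chat_payload(model: str, prompt: str) -> dict:
    """OpenAI互換のchat completionsリクエストボディを組み立てる(純粋関数)。"""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask_local_model(base_url: str, model: str, prompt: str) -> str:
    """ローカルにホストしたモデルへ問い合わせ、応答テキストを返す。
    base_url の例: http://localhost:11434Ollamaの既定ポートを仮定。"""
    body = json.dumps(build_chat_payload(model, prompt)).encode("utf-8")
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

このようにエンドポイントがローカルで完結するため、プロンプトも応答も外部のクラウドには送信されません。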
私たちは、あなたが自身のデジタルライフをコントロールする基本的な権利を有すると確信しています。この権利を守る最も効果的な方法は、あなたのデータをローカルの、あなた自身のハードウェア上でホストすることです。
Olaresは、あなたが自身のデジタル資産をローカルで容易に所有し管理できるよう設計された、オープンソースのパーソナルクラウドOSです。もはやパブリッククラウドサービスに依存する必要はありません。Olares上で、例えばOllamaを利用した大規模言語モデルのホスティング、SD WebUIによる画像生成、Mastodonを用いた検閲のないソーシャルスペースの構築など、強力なオープンソースの代替サービスやアプリケーションをローカルにデプロイできます。Olaresは、クラウドコンピューティングの絶大な力を活用しつつ、それを完全に自身のコントロール下に置くことを可能にします。
> 🌟 *新しいリリースや更新についての通知を受け取るために、スターを付けてください。*
## アーキテクチャ
パブリッククラウドは、IaaS (Infrastructure as a Service)、PaaS (Platform as a Service)、SaaS (Software as a Service) といったサービスレイヤーで構成されています。Olaresは、これら各レイヤーに対するオープンソースの代替ソリューションを提供しています。
![Olaresのアーキテクチャ](https://file.bttcdn.com/github/olares/olares-architecture.jpg)
各コンポーネントの詳細については、[Olares アーキテクチャ](https://docs.olares.com/manual/system-architecture.html)(英語版)をご参照ください。
> 🔍**OlaresとNASの違いは何ですか**
>
> Olaresは、ワンストップのセルフホスティング・パーソナルクラウド体験の実現を目指しています。そのコア機能とユーザーの位置付けは、ネットワークストレージに特化した従来のNASとは大きく異なります。詳細は、[OlaresとNASの比較](https://docs.olares.com/manual/olares-vs-nas.html)(英語版)をご参照ください。
## 機能
Olaresは、セキュリティ、使いやすさ、開発の柔軟性を向上させるための幅広い機能を提供します
- **エンタープライズグレードのセキュリティ**: Tailscale、Headscale、Cloudflare Tunnel、FRPを使用してネットワーク構成を簡素化します。
- **安全で許可のないアプリケーションエコシステム**: サンドボックス化によりアプリケーションの分離とセキュリティを確保します。
- **統一ファイルシステムとデータベース**: 自動スケーリング、バックアップ、高可用性を提供します。
- **シングルサインオン**: 一度ログインするだけで、Olares内のすべてのアプリケーションに共有認証サービスを使用してアクセスできます。
- **AI機能**: GPU管理、ローカルAIモデルホスティング、プライベートナレッジベースの包括的なソリューションを提供し、データプライバシーを維持します。
- **内蔵アプリケーション**: ファイルマネージャー、同期ドライブ、ボールト、リーダー、アプリマーケット、設定、ダッシュボードを含みます。
- **どこからでもシームレスにアクセス**: モバイル、デスクトップ、ブラウザ用の専用クライアントを使用して、どこからでもデバイスにアクセスできます。
- **開発ツール**: アプリケーションの開発と移植を容易にする包括的な開発ツールを提供します。
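上記のシングルサインオンは、各アプリのPodに注入されるEnvoyサイドカーがext_authzフィルタで認証サーバーAuthelia、本リポジトリの設定では authelia-backend.os-system:9091へ検証リクエストを転送する、いわゆるforward auth方式で実現されています。以下は、その検証リクエストに付与されるヘッダーと判定ルールを単純化した説明用のスケッチです関数名は仮のもので、キーはサイドカー設定のheaders_to_addに対応します

```python
def build_forward_auth_headers(
    method: str, scheme: str, host: str, path: str, client_ip: str
) -> dict:
    """ext_authz検証リクエストに付与するX-Forwarded-*ヘッダーを組み立てる。"""
    return {
        "X-Forwarded-Method": method,
        "X-Forwarded-Proto": scheme,
        "X-Forwarded-Host": host,
        "X-Forwarded-Uri": path,
        "X-Forwarded-For": client_ip,
    }


def is_request_allowed(auth_status_code: int) -> bool:
    """認証サーバーの応答コードから通過可否を判定する単純化したルール。
    failure_mode_allow: false に相当し、2xx以外はすべて拒否とみなす。"""
    return 200 <= auth_status_code < 300
```

実際には、この判定を各Podのサイドカーが行うため、アプリ側は認証ロジックを持たずに共有認証サービスの保護を受けられます。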
以下はUIのスクリーンショットプレビューです。
| **デスクトップ:馴染みやすく効率的なアクセスポイント** | **ファイルマネージャー:データを安全に保管** |
| :--------: | :-------: |
| ![デスクトップ](https://file.bttcdn.com/github/terminus/v2/desktop.jpg) | ![ファイル](https://file.bttcdn.com/github/terminus/v2/files.jpg) |
| **Vault安心のパスワード管理**|**マーケット:コントロール可能なアプリエコシステム** |
| ![vault](https://file.bttcdn.com/github/terminus/v2/vault.jpg) | ![マーケット](https://file.bttcdn.com/github/terminus/v2/market.jpg) |
| **Wiseあなただけのデジタルガーデン** | **設定Olaresを効率的に管理** |
| ![Wise](https://file.bttcdn.com/github/terminus/v2/wise.jpg) | ![設定](https://file.bttcdn.com/github/terminus/v2/settings.jpg) |
| **ダッシュボードOlaresを継続的に監視** | **プロフィール:ユニークなパーソナルページ** |
| ![ダッシュボード](https://file.bttcdn.com/github/terminus/v2/dashboard.jpg) | ![profile](https://file.bttcdn.com/github/terminus/v2/profile.jpg) |
| **Studio開発、デバッグ、デプロイをワンストップで**|**コントロールパネルKubernetesクラスターを簡単に管理** |
| ![Devbox](https://file.bttcdn.com/github/terminus/v2/devbox.jpg) | ![コントロールハブ](https://file.bttcdn.com/github/terminus/v2/controlhub.jpg)|
## なぜOlaresなのか
以下の理由とシナリオで、Olaresはプライベートで強力かつ安全な主権クラウド体験を提供します
@@ -66,121 +101,39 @@ Olaresを使用して、ハードウェアをAIホームサーバーに変換し
Olaresは以下のLinuxプラットフォームで動作検証を完了しています
- Ubuntu 24.04 LTS 以降
- Debian 11 以降
> **追加インストール手順**
> Olares は macOS、Windows、PVE、Raspberry Pi などのプラットフォームや、Linux 上での Docker Compose を用いたインストールにも対応しています。ただし、これらの方法は開発およびテスト環境専用です。詳しくは[追加インストール手順](https://docs.olares.xyz/developer/install/additional-installations.html)をご参照ください。
### Olaresのセットアップ
自分のデバイスでOlaresを始めるには、[はじめにガイド](https://docs.olares.com/manual/get-started/)に従ってステップバイステップの手順を確認してください。
## プロジェクトナビゲーション
Olaresは、GitHubで公開されている多数のコードリポジトリで構成されています。現在のリポジトリは、オペレーティングシステムの最終コンパイル、パッケージング、インストール、およびアップグレードを担当しており、特定の変更は主に対応するリポジトリで行われます。
> [!NOTE]
> 現在、Olaresのサブプロジェクトのコードを当リポジトリへ移行する作業を進めています。この作業が完了するまでには数ヶ月を要する見込みです。完了後には、当リポジトリを通じてOlaresシステムの全貌をご覧いただけるようになります。
このセクションでは、Olares リポジトリ内の主要なディレクトリをリストアップしています
<details>
<summary><b>フレームワークコンポーネント</b></summary>
| ディレクトリ | リポジトリ | 説明 |
| --- | --- | --- |
| [frameworks/app-service](https://github.com/beclab/olares/tree/main/frameworks/app-service) | <https://github.com/beclab/app-service> | システムフレームワークコンポーネントで、システム内のすべてのアプリのライフサイクル管理とさまざまなセキュリティ制御を提供します。 |
| [frameworks/backup-server](https://github.com/beclab/olares/tree/main/frameworks/backup-server) | <https://github.com/beclab/backup-server> | システムフレームワークコンポーネントで、定期的なフルまたは増分クラスターのバックアップサービスを提供します。 |
| [frameworks/bfl](https://github.com/beclab/olares/tree/main/frameworks/bfl) | <https://github.com/beclab/bfl> | ランチャーのバックエンドBFL、ユーザーアクセスポイントとして機能し、さまざまなバックエンドサービスのインターフェースを集約およびプロキシします。 |
| [frameworks/GPU](https://github.com/beclab/olares/tree/main/frameworks/GPU) | <https://github.com/grgalex/nvshare> | 複数のプロセスまたはKubernetes上で実行されるコンテナが同じ物理GPU上で同時に安全に実行できるようにするGPU共有メカニズムで、各プロセスが全GPUメモリを利用できます。 |
| [frameworks/l4-bfl-proxy](https://github.com/beclab/olares/tree/main/frameworks/l4-bfl-proxy) | <https://github.com/beclab/l4-bfl-proxy> | BFLの第4層ネットワークプロキシ。SNIを事前に読み取ることで、ユーザーのIngressに通過する動的ルートを提供します。 |
| [frameworks/osnode-init](https://github.com/beclab/olares/tree/main/frameworks/osnode-init) | <https://github.com/beclab/osnode-init> | 新しいノードがクラスターに参加する際にノードデータを初期化するシステムフレームワークコンポーネント。 |
| [frameworks/system-server](https://github.com/beclab/olares/tree/main/frameworks/system-server) | <https://github.com/beclab/system-server> | システムランタイムフレームワークの一部として、アプリ間のセキュリティコールのメカニズムを提供します。 |
| [frameworks/tapr](https://github.com/beclab/olares/tree/main/frameworks/tapr) | <https://github.com/beclab/tapr> | Olaresアプリケーションランタイムコンポーネント。 |
</details>
<details>
<summary><b>システムレベルのアプリケーションとサービス</b></summary>
| ディレクトリ | リポジトリ | 説明 |
| --- | --- | --- |
| [apps/analytic](https://github.com/beclab/olares/tree/main/apps/analytic) | <https://github.com/beclab/analytic> | [Umami](https://github.com/umami-software/umami)に基づいて開発されたAnalyticは、Google Analyticsのシンプルで高速、プライバシー重視の代替品です。 |
| [apps/market](https://github.com/beclab/olares/tree/main/apps/market) | <https://github.com/beclab/market> | このリポジトリは、Olaresのアプリケーションマーケットのフロントエンド部分をデプロイします。 |
| [apps/market-server](https://github.com/beclab/olares/tree/main/apps/market-server) | <https://github.com/beclab/market> | このリポジトリは、Olaresのアプリケーションマーケットのバックエンド部分をデプロイします。 |
| [apps/argo](https://github.com/beclab/olares/tree/main/apps/argo) | <https://github.com/argoproj/argo-workflows> | ローカル推奨アルゴリズムのコンテナ実行をオーケストレーションするワークフローエンジン。 |
| [apps/desktop](https://github.com/beclab/olares/tree/main/apps/desktop) | <https://github.com/beclab/desktop> | システムの内蔵デスクトップアプリケーション。 |
| [apps/devbox](https://github.com/beclab/olares/tree/main/apps/devbox) | <https://github.com/beclab/devbox> | Olaresアプリケーションの移植と開発のための開発者向けIDE。 |
| [apps/vault](https://github.com/beclab/olares/tree/main/apps/vault) | <https://github.com/beclab/termipass> | [Padloc](https://github.com/padloc/padloc)に基づいて開発された、あらゆる規模のチームや企業向けの無料の1PasswordおよびBitwardenの代替品。DID、Olares ID、およびOlaresデバイスの管理を支援するクライアントとして機能します。 |
| [apps/files](https://github.com/beclab/olares/tree/main/apps/files) | <https://github.com/beclab/files> | [Filebrowser](https://github.com/filebrowser/filebrowser)から変更された内蔵ファイルマネージャーで、Drive、Sync、およびさまざまなOlares物理ノード上のファイルの管理を提供します。 |
| [apps/notifications](https://github.com/beclab/olares/tree/main/apps/notifications) | <https://github.com/beclab/notifications> | Olaresの通知システム |
| [apps/profile](https://github.com/beclab/olares/tree/main/apps/profile) | <https://github.com/beclab/profile> | OlaresのLinktree代替品 |
| [apps/rsshub](https://github.com/beclab/olares/tree/main/apps/rsshub) | <https://github.com/beclab/rsshub> | [RssHub](https://github.com/DIYgod/RSSHub)に基づいたRSS購読管理ツール。 |
| [apps/settings](https://github.com/beclab/olares/tree/main/apps/settings) | <https://github.com/beclab/settings> | 内蔵システム設定。 |
| [apps/system-apps](https://github.com/beclab/olares/tree/main/apps/system-apps) | <https://github.com/beclab/system-apps> | _kubesphere/console_プロジェクトに基づいて構築されたsystem-serviceは、視覚的なダッシュボードと機能豊富なControlHubを通じて、システムの実行状態とリソース使用状況を理解し、制御するためのセルフホストクラウドプラットフォームを提供します。 |
| [apps/wizard](https://github.com/beclab/olares/tree/main/apps/wizard) | <https://github.com/beclab/wizard> | ユーザーにシステムのアクティベーションプロセスを案内するウィザードアプリケーション。 |
</details>
<details>
<summary><b>サードパーティコンポーネントとサービス</b></summary>
| ディレクトリ | リポジトリ | 説明 |
| --- | --- | --- |
| [third-party/authelia](https://github.com/beclab/olares/tree/main/third-party/authelia) | <https://github.com/beclab/authelia> | Webポータルを介してアプリケーションに二要素認証とシングルサインオンSSOを提供するオープンソースの認証および認可サーバー。 |
| [third-party/headscale](https://github.com/beclab/olares/tree/main/third-party/headscale) | <https://github.com/beclab/headscale> | OlaresにおけるTailscaleコントロールサーバーのオープンソースのセルフホスト実装で、LarePassを通じて異なるデバイス間のTailscaleを管理します。 |
| [third-party/infisical](https://github.com/beclab/olares/tree/main/third-party/infisical) | <https://github.com/beclab/infisical> | チーム/インフラストラクチャ間でシークレットを同期し、シークレットの漏洩を防ぐオープンソースのシークレット管理プラットフォーム。 |
| [third-party/juicefs](https://github.com/beclab/olares/tree/main/third-party/juicefs) | <https://github.com/beclab/juicefs-ext> | RedisとS3の上に構築された分散POSIXファイルシステムで、異なるノード上のアプリがPOSIXインターフェースを介して同じデータにアクセスできるようにします。 |
| [third-party/ks-console](https://github.com/beclab/olares/tree/main/third-party/ks-console) | <https://github.com/kubesphere/console> | Web GUIを介してクラスター管理を可能にするKubesphereコンソール。 |
| [third-party/ks-installer](https://github.com/beclab/olares/tree/main/third-party/ks-installer) | <https://github.com/beclab/ks-installer-ext> | クラスターリソース定義に基づいて自動的にKubesphereクラスターを作成するKubesphereインストーラーコンポーネント。 |
| [third-party/kube-state-metrics](https://github.com/beclab/olares/tree/main/third-party/kube-state-metrics) | <https://github.com/beclab/kube-state-metrics> | kube-state-metricsKSMは、Kubernetes APIサーバーをリッスンし、オブジェクトの状態に関するメトリックを生成するシンプルなサービスです。 |
| [third-party/notification-manager](https://github.com/beclab/olares/tree/main/third-party/notification-manager) | <https://github.com/beclab/notification-manager-ext> | 複数の通知チャネルの統一管理と通知内容のカスタム集約を提供するKubesphereの通知管理コンポーネント。 |
| [third-party/predixy](https://github.com/beclab/olares/tree/main/third-party/predixy) | <https://github.com/beclab/predixy> | 利用可能なノードを自動的に識別し、名前空間の分離を追加するRedisクラスターのプロキシサービス。 |
| [third-party/redis-cluster-operator](https://github.com/beclab/olares/tree/main/third-party/redis-cluster-operator) | <https://github.com/beclab/redis-cluster-operator> | Kubernetesに基づいてRedisクラスターを作成および管理するためのクラウドネイティブツール。 |
| [third-party/seafile-server](https://github.com/beclab/olares/tree/main/third-party/seafile-server) | <https://github.com/beclab/seafile-server> | データストレージを処理するSeafile同期ドライブのバックエンドサービス。 |
| [third-party/seahub](https://github.com/beclab/olares/tree/main/third-party/seahub) | <https://github.com/beclab/seahub> | ファイル共有、データ同期などを処理するSeafile同期ドライブのフロントエンドおよびミドルウェアサービス。 |
| [third-party/tailscale](https://github.com/beclab/olares/tree/main/third-party/tailscale) | <https://github.com/tailscale/tailscale> | TailscaleはすべてのプラットフォームのLarePassに統合されています。 |
</details>
<details>
<summary><b>追加のライブラリとコンポーネント</b></summary>
| ディレクトリ | リポジトリ | 説明 |
| --- | --- | --- |
| [build/installer](https://github.com/beclab/olares/tree/main/build/installer) | | インストーラービルドを生成するためのテンプレート。 |
| [build/manifest](https://github.com/beclab/olares/tree/main/build/manifest) | | インストールビルドイメージリストテンプレート。 |
| [libs/fs-lib](https://github.com/beclab/olares/tree/main/libs) | <https://github.com/beclab/fs-lib> | JuiceFSに基づいて実装されたiNotify互換インターフェースのSDKライブラリ。 |
| [scripts](https://github.com/beclab/olares/tree/main/scripts) | | インストーラービルドを生成するための補助スクリプト。 |
</details>
* **`apps`**: システムアプリケーションのコードが含まれており、主に `larepass` 用です。
* **`cli`**: Olares のコマンドラインインターフェースツールである `olares-cli` のコードが含まれています。
* **`daemon`**: システムデーモンプロセスである `olaresd` のコードが含まれています。
* **`docs`**: プロジェクトのドキュメントが含まれています。
* **`framework`**: Olares システムサービスが含まれています。
* **`infrastructure`**: コンピューティング、ストレージ、ネットワーキング、GPU などのインフラストラクチャコンポーネントに関連するコードが含まれています。
* **`platform`**: データベースやメッセージキューなどのクラウドネイティブコンポーネントのコードが含まれています。
* **`vendor`**: サードパーティのハードウェアベンダーからのコードが含まれています。
## Olaresへの貢献
あらゆる形での貢献を歓迎します:
- Olaresで独自のアプリケーションを開発したい場合は、以下を参照してください<br>
https://docs.olares.com/developer/develop/
- Olaresの改善に協力したい場合は、以下を参照してください<br>
https://docs.olares.com/developer/contribute/olares.html
## コミュニティと連絡先


@@ -1,26 +0,0 @@
apiVersion: v2
name: desktop
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,749 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: edge-desktop
namespace: {{ .Release.Namespace }}
labels:
app: edge-desktop
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/version: '0.0.1'
spec:
replicas: 1
selector:
matchLabels:
app: edge-desktop
template:
metadata:
labels:
app: edge-desktop
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
- authelia-backend.os-system:9091,system-server.user-system-{{ .Values.bfl.username }}:80
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: edge-desktop
image: beclab/desktop:v0.2.59
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 80
env:
- name: apiServerURL
value: http://bfl.{{ .Release.Namespace }}:8080
- name: desktop-server
image: beclab/desktop-server:v0.2.59
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
volumeMounts:
- name: userspace-dir
mountPath: /Home
ports:
- containerPort: 3000
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.desktop.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.desktop.appKey }}
- name: APP_SERVICE_SERVICE_HOST
value: app-service.os-system
- name: APP_SERVICE_SERVICE_PORT
value: '6755'
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.5'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
env:
- name: WS_PORT
value: '3010'
- name: WS_URL
value: /websocket/message
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
- name: userspace-dir
hostPath:
type: Directory
path: '{{ .Values.userspace.userData }}'
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
---
apiVersion: v1
kind: Service
metadata:
name: edge-desktop
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: edge-desktop
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: {{ .Release.Namespace }}
name: internal-kubectl
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}:edge-desktop-rb
subjects:
- kind: ServiceAccount
namespace: {{ .Release.Namespace }}
name: internal-kubectl
roleRef:
# kind: Role
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: app-event-watcher
namespace: user-system-{{ .Values.bfl.username }}
spec:
callbacks:
- filters:
type:
- app-installation-event
op: Create
uri: /server/app_installation_event
- filters:
type:
- entrance-state-event
op: Create
uri: /server/entrance_state_event
- filters:
type:
- settings-event
op: Create
uri: /server/app_installation_event
- filters:
type:
- system-upgrade-event
op: Create
uri: /server/system_upgrade_event
dataType: event
deployment: edge-desktop
description: desktop event watcher
endpoint: edge-desktop.{{ .Release.Namespace }}
group: message-disptahcer.system-server
kind: watcher
namespace: {{ .Release.Namespace }}
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: intent-api
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: legacy_api
deployment: edge-desktop
description: edge-desktop legacy api
endpoint: edge-desktop.{{ .Release.Namespace }}
group: api.intent
kind: provider
namespace: {{ .Release.Namespace }}
version: v1
opApis:
- name: POST
uri: /server/intent/send
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: intent-api-v2
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: legacy_api
deployment: edge-desktop
description: edge-desktop legacy api
endpoint: edge-desktop.{{ .Release.Namespace }}
group: api.intent
kind: provider
namespace: {{ .Release.Namespace }}
version: v2
opApis:
- name: POST
uri: /server/intent/send
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: destktop-ai-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: ai_message
deployment: edge-desktop
description: search ai callback
endpoint: edge-desktop.{{ .Release.Namespace }}
group: service.desktop
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: AIMessage
uri: /server/ai_message
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: desktop-notification
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: notification
deployment: edge-desktop
description: send notification to desktop client
endpoint: edge-desktop.{{ .Release.Namespace }}
group: service.desktop
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: Create
uri: /notification/create
- name: Query
uri: /notification/query
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: desktop
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: desktop
appid: desktop
key: {{ .Values.os.desktop.appKey }}
secret: {{ .Values.os.desktop.appSecret }}
permissions:
- dataType: files
group: service.files
ops:
- Query
version: v1
- dataType: datastore
group: service.bfl
ops:
- GetKey
- GetKeyPrefix
- SetKey
- DeleteKey
version: v1
- dataType: app
group: service.bfl
ops:
- UserApps
version: v1
- dataType: app
group: service.appstore
ops:
- UninstallDevApp
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: desktop-config
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: config
deployment: edge-desktop
description: Set Desktop Config
endpoint: edge-desktop.{{ .Release.Namespace }}
group: service.desktop
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: Update
uri: /server/updateDesktopConfig
version: v1
status:
state: active
---
apiVersion: v1
data:
envoy.yaml: |
admin:
access_log_path: "/dev/stdout"
address:
socket_address:
address: 0.0.0.0
port_value: 15000
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 15003
listener_filters:
- name: envoy.filters.listener.original_dst
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: desktop_http
upgrade_configs:
- upgrade_type: websocket
- upgrade_type: tailscale-control-protocol
skip_xff_append: false
max_request_headers_kb: 500
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 180s
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.ext_authz
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
http_service:
path_prefix: '/api/verify/'
server_uri:
uri: authelia-backend.os-system:9091
cluster: authelia
timeout: 2s
authorization_request:
allowed_headers:
patterns:
- exact: accept
- exact: cookie
- exact: proxy-authorization
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
value: '%REQ(:METHOD)%'
- key: X-Forwarded-Proto
value: '%REQ(:SCHEME)%'
- key: X-Forwarded-Host
value: '%REQ(:AUTHORITY)%'
- key: X-Forwarded-Uri
value: '%REQ(:PATH)%'
- key: X-Forwarded-For
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
authorization_response:
allowed_upstream_headers:
patterns:
- exact: authorization
- exact: proxy-authorization
- prefix: remote-
- prefix: authelia-
allowed_client_headers:
patterns:
- exact: set-cookie
allowed_client_headers_on_success:
patterns:
- exact: set-cookie
failure_mode_allow: false
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
- name: listener_image
address:
socket_address:
address: 127.0.0.1
port_value: 15080
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: tapr_http
http_protocol_options:
accept_http_10: true
upgrade_configs:
- upgrade_type: websocket
skip_xff_append: false
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/images/upload"
route:
cluster: images
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
- name: original_dst
connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: authelia
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: authelia
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: authelia-backend.os-system
port_value: 9091
- name: images
connect_timeout: 5s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: images
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: tapr-images-svc.user-system-{{ .Values.bfl.username }}
port_value: 8080
kind: ConfigMap
metadata:
name: sidecar-configs
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
data:
envoy.yaml: |
admin:
access_log_path: "/dev/stdout"
address:
socket_address:
address: 0.0.0.0
port_value: 15000
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 15003
listener_filters:
- name: envoy.filters.listener.original_dst
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: desktop_http
upgrade_configs:
- upgrade_type: websocket
- upgrade_type: tailscale-control-protocol
skip_xff_append: false
max_request_headers_kb: 500
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/ws"
route:
cluster: ws_original_dst
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 180s
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.ext_authz
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
http_service:
path_prefix: '/api/verify/'
server_uri:
uri: authelia-backend.os-system:9091
cluster: authelia
timeout: 2s
authorization_request:
allowed_headers:
patterns:
- exact: accept
- exact: cookie
- exact: proxy-authorization
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
value: '%REQ(:METHOD)%'
- key: X-Forwarded-Proto
value: '%REQ(:SCHEME)%'
- key: X-Forwarded-Host
value: '%REQ(:AUTHORITY)%'
- key: X-Forwarded-Uri
value: '%REQ(:PATH)%'
- key: X-Forwarded-For
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
authorization_response:
allowed_upstream_headers:
patterns:
- exact: authorization
- exact: proxy-authorization
- prefix: remote-
- prefix: authelia-
allowed_client_headers:
patterns:
- exact: set-cookie
allowed_client_headers_on_success:
patterns:
- exact: set-cookie
failure_mode_allow: false
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
- name: listener_image
address:
socket_address:
address: 127.0.0.1
port_value: 15080
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: tapr_http
http_protocol_options:
accept_http_10: true
upgrade_configs:
- upgrade_type: websocket
skip_xff_append: false
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/images/upload"
route:
cluster: images
http_filters:
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
- name: original_dst
connect_timeout: 5000s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: ws_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: ws_original_dst
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: localhost
port_value: 40010
- name: authelia
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: authelia
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: authelia-backend.os-system
port_value: 9091
- name: images
connect_timeout: 5s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: images
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: tapr-images-svc.user-system-{{ .Values.bfl.username }}
port_value: 8080
kind: ConfigMap
metadata:
name: sidecar-ws-configs
namespace: {{ .Release.Namespace }}


@@ -1,38 +0,0 @@
bfl:
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
appstore:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""


@@ -1,26 +0,0 @@
apiVersion: v2
name: files
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "files.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "files.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "files.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "files.labels" -}}
helm.sh/chart: {{ include "files.chart" . }}
{{ include "files.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "files.selectorLabels" -}}
app.kubernetes.io/name: {{ include "files.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "files.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "files.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -1,938 +0,0 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $zinc_files_secret := (lookup "v1" "Secret" $namespace "zinc-files-secrets") -}}
{{- $password := "" -}}
{{ if $zinc_files_secret -}}
{{ $password = (index $zinc_files_secret "data" "password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password := "" -}}
{{ if $zinc_files_secret -}}
{{ $redis_password = (index $zinc_files_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password := "" -}}
{{ if $zinc_files_secret -}}
{{ $pg_password = (index $zinc_files_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $files_frontend_nats_secret := (lookup "v1" "Secret" $namespace "files-frontend-nats-secrets") -}}
{{- $files_frontend_nats_password := "" -}}
{{ if $files_frontend_nats_secret -}}
{{ $files_frontend_nats_password = (index $files_frontend_nats_secret "data" "files_frontend_nats_password") }}
{{ else -}}
{{ $files_frontend_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: cloud-drive-integration-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: cloud-drive-integration-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: cloud-drive-integration
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: cloud_drive_integration_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: cloud-drive-integration-secrets
databases:
- name: cloud-drive-integration
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cloud-drive-integration-secrets-auth
namespace: {{ .Release.Namespace }}
data:
redis_password: {{ $redis_password_data }}
redis_addr: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379
redis_host: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
redis_port: '6379'
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cloud-drive-integration-userspace-data
namespace: {{ .Release.Namespace }}
data:
appData: "{{ .Values.userspace.appData }}"
appCache: "{{ .Values.userspace.appCache }}"
username: "{{ .Values.bfl.username }}"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: files-deployment
namespace: {{ .Release.Namespace }}
labels:
app: files
applications.app.bytetrade.io/name: files
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/files/icon.png
applications.app.bytetrade.io/title: Files
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"files", "host":"files-service", "port":80,"title":"Files","windowPushState":true}]'
spec:
replicas: 1
selector:
matchLabels:
app: files
template:
metadata:
labels:
app: files
io.bytetrade.app: "true"
annotations:
# support nginx 1.24.3 1.25.3
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "files-frontend"
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "driver-server"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "drive"
spec:
serviceAccountName: bytetrade-controller
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
initContainers:
- name: init-data
image: busybox:1.28
securityContext:
privileged: true
runAsNonRoot: false
runAsUser: 0
volumeMounts:
- name: fb-data
mountPath: /appdata
- name: uploads-temp
mountPath: /uploadstemp
command:
- sh
- -c
- |
chown -R 1000:1000 /uploadstemp && \
chown -R 1000:1000 /appdata
- args:
- -it
- authelia-backend.os-system:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: PGPORT
value: "5432"
- name: PGUSER
value: cloud_drive_integration_{{ .Values.bfl.username }}
- name: PGPASSWORD
value: "{{ $pg_password | b64dec }}"
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_cloud_drive_integration
- name: files-frontend-init
image: beclab/files-frontend:v1.3.61
imagePullPolicy: IfNotPresent
volumeMounts:
- name: app
mountPath: /cp_app
- name: nginx-confd
mountPath: /confd
command:
- sh
- -c
- |
cp -rf /app/* /cp_app/. && cp -rf /etc/nginx/conf.d/* /confd/.
containers:
# - name: gateway
# image: beclab/appdata-gateway:0.1.12
# imagePullPolicy: IfNotPresent
# ports:
# - containerPort: 8080
# env:
# - name: FILES_SERVER_TAG
# value: 'beclab/files-server:v0.2.27'
# - name: NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
# - name: OS_SYSTEM_SERVER
# value: system-server.user-system-{{ .Values.bfl.username }}
# - name: files
# image: beclab/files-server:v0.2.27
# imagePullPolicy: IfNotPresent
# volumeMounts:
# - name: fb-data
# mountPath: /appdata
# - name: userspace-dir
# mountPath: /data/Home
# - name: userspace-app-dir
# mountPath: /data/Application
# - name: watch-dir
# mountPath: /data/Home/Documents
# - name: upload-appdata
# mountPath: /appcache/
# ports:
# - containerPort: 8110
# env:
# - name: ES_ENABLED
# value: 'True'
# - name: WATCHER_ENABLED
# value: 'True'
# - name: cloud-drive-integration_BASE_ENABLED
# value: 'True'
# - name: BFL_NAME
# value: '{{ .Values.bfl.username }}'
# - name: FB_DATABASE
# value: /appdata/database/filebrowser.db
# - name: FB_CONFIG
# value: /appdata/config/settings.json
# - name: FB_ROOT
# value: /data
# - name: OS_SYSTEM_SERVER
# value: system-server.user-system-{{ .Values.bfl.username }}
# - name: OS_APP_SECRET
# value: '{{ .Values.os.files.appSecret }}'
# - name: OS_APP_KEY
# value: {{ .Values.os.files.appKey }}
# - name: ZINC_USER
# value: zincuser-files-{{ .Values.bfl.username }}
# - name: ZINC_PASSWORD
# value: {{ $password | b64dec }}
# - name: ZINC_HOST
# value: zinc-server-svc.user-system-{{ .Values.bfl.username }}
# - name: ZINC_PORT
# value: "80"
# - name: ZINC_INDEX
# value: {{ .Release.Namespace }}_zinc-files
# - name: WATCH_DIR
# value: /data/Home/Documents
# - name: PATH_PREFIX
# value: /data/Home
# - name: REDIS_HOST
# value: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
# - name: REDIS_PORT
# value: '6379'
# - name: REDIS_USERNAME
# value: ''
# - name: REDIS_PASSWORD
# value: {{ $redis_password | b64dec }}
# - name: REDIS_USE_SSL
# value: 'false'
# # use redis db 0 for redis cache
# - name: REDIS_DB
# value: '0'
# - name: REDIS_URL
# value: 'redis://:{{ $redis_password | b64dec }}@redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379/0'
# - name: POD_NAME
# valueFrom:
# fieldRef:
# fieldPath: metadata.name
# - name: NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
# - name: CONTAINER_NAME
# value: files
# - name: NOTIFY_SERVER
# value: fsnotify-svc.user-system-{{ .Values.bfl.username }}:5079
# command:
# - /filebrowser
# - --noauth
- name: files-frontend
image: beclab/docker-nginx-headers-more:ubuntu-v0.1.0
imagePullPolicy: IfNotPresent
securityContext:
runAsNonRoot: false
runAsUser: 0
ports:
- containerPort: 80
env:
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-files-frontend
- name: NATS_PASSWORD
value: {{ $files_frontend_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
volumeMounts:
- name: userspace-dir
mountPath: /data
- name: app
mountPath: /app
- name: nginx-confd
mountPath: /etc/nginx/conf.d
- name: drive-server
image: beclab/drive:v0.0.72
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: DATABASE_URL
value: postgres://cloud_drive_integration_{{ .Values.bfl.username }}:{{ $pg_password | b64dec }}@citus-master-svc.user-system-{{ .Values.bfl.username }}:5432/user_space_{{ .Values.bfl.username }}_cloud_drive_integration
- name: REDIS_URL
value: redis://:{{ $redis_password | b64dec }}@redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379/0
- name: TASK_EXECUTOR_MAX_THREADS
value: '6'
ports:
- containerPort: 8181
volumeMounts:
- name: upload-data
mountPath: /data/Home
- name: upload-appdata
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
- name: task-executor
image: beclab/driveexecutor:v0.0.72
imagePullPolicy: IfNotPresent
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: DATABASE_URL
value: postgres://cloud_drive_integration_{{ .Values.bfl.username }}:{{ $pg_password | b64dec }}@citus-master-svc.user-system-{{ .Values.bfl.username }}:5432/user_space_{{ .Values.bfl.username }}_cloud_drive_integration
- name: REDIS_URL
value: redis://:{{ $redis_password | b64dec }}@redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379/0
- name: TASK_EXECUTOR_MAX_THREADS
value: '6'
ports:
- containerPort: 8181
volumeMounts:
- name: upload-data
mountPath: /data/Home
- name: upload-appdata
mountPath: /appdata/
- name: userspace-app-dir
mountPath: /data/Application
- name: data-dir
mountPath: /data
# - name: terminus-upload-sidecar
# image: beclab/upload:v1.0.3
# env:
# - name: UPLOAD_FILE_TYPE
# value: '*'
# - name: UPLOAD_LIMITED_SIZE
# value: '21474836481'
# volumeMounts:
# - name: upload-data
# mountPath: /data/Home
# - name: upload-appdata
# mountPath: /appdata/
# - name: userspace-app-dir
# mountPath: /data/Application
# - name: uploads-temp
# mountPath: /uploadstemp
# resources: { }
# terminationMessagePath: /dev/termination-log
# terminationMessagePolicy: File
# imagePullPolicy: IfNotPresent
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumes:
- name: data-dir
hostPath:
path: '{{ .Values.rootPath }}/rootfs/userspace'
type: Directory
- name: watch-dir
hostPath:
type: Directory
path: '{{ .Values.userspace.userData }}/Documents'
- name: userspace-dir
hostPath:
type: Directory
path: '{{ .Values.userspace.userData }}'
- name: userspace-app-dir
hostPath:
type: Directory
path: '{{ .Values.userspace.appData }}'
- name: fb-data
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appCache}}/files'
- name: upload-data
hostPath:
type: Directory
path: '{{ .Values.userspace.userData }}'
- name: upload-appdata
hostPath:
type: Directory
path: '{{ .Values.userspace.appCache}}'
- name: uploads-temp
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appCache }}/files/uploadstemp'
- name: terminus-sidecar-config
configMap:
name: sidecar-upload-configs
items:
- key: envoy.yaml
path: envoy.yaml
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: files-service
namespace: {{ .Release.Namespace }}
spec:
selector:
app: files
type: ClusterIP
ports:
- protocol: TCP
name: files
port: 80
targetPort: 80
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: files-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: files
deployment: files
description: files provider
endpoint: files-service.{{ .Release.Namespace }}
group: service.files
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: Query
uri: /provider/query_file
- name: GetSearchFolderStatus
uri: /provider/get_search_folder_status
- name: UpdateSearchFolderPaths
uri: /provider/update_search_folder_paths
- name: GetDatasetFolderStatus
uri: /provider/get_dataset_folder_status
- name: UpdateDatasetFolderPaths
uri: /provider/update_dataset_folder_paths
version: v1
status:
state: active
#---
#apiVersion: sys.bytetrade.io/v1alpha1
#kind: ApplicationPermission
#metadata:
# name: files
# namespace: user-system-{{ .Values.bfl.username }}
#spec:
# app: files
# appid: files
# key: {{ .Values.os.files.appKey }}
# secret: {{ .Values.os.files.appSecret }}
# permissions:
# - dataType: gateway
# group: service.difyfusionclient
# ops:
# - DifyGatewayBaseProvider
# version: v1
#status:
# state: active
#---
#apiVersion: v1
#data:
# mappings: |
# {
# "properties": {
# "@timestamp": {
# "type": "date",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "_id": {
# "type": "keyword",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "content": {
# "type": "text",
# "index": true,
# "store": true,
# "sortable": false,
# "aggregatable": false,
# "highlightable": true
# },
# "created": {
# "type": "numeric",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "format_name": {
# "type": "text",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "md5": {
# "type": "text",
# "analyzer": "keyword",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "name": {
# "type": "text",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# },
# "size": {
# "type": "numeric",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "updated": {
# "type": "numeric",
# "index": true,
# "store": false,
# "sortable": true,
# "aggregatable": true,
# "highlightable": false
# },
# "where": {
# "type": "text",
# "analyzer": "keyword",
# "index": true,
# "store": false,
# "sortable": false,
# "aggregatable": false,
# "highlightable": false
# }
# }
# }
#kind: ConfigMap
#metadata:
# name: zinc-files
# namespace: user-system-{{ .Values.bfl.username }}
---
apiVersion: v1
kind: Secret
metadata:
name: zinc-files-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
password: {{ $password }}
redis_password: {{ $redis_password }}
pg_password: {{ $pg_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: files-frontend-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
files_frontend_nats_password: {{ $files_frontend_nats_password }}
type: Opaque
#---
#apiVersion: apr.bytetrade.io/v1alpha1
#kind: MiddlewareRequest
#metadata:
# name: zinc-files
# namespace: user-system-{{ .Values.bfl.username }}
#spec:
# app: files
# appNamespace: user-space-{{ .Values.bfl.username }}
# middleware: zinc
# zinc:
# user: zincuser-files-{{ .Values.bfl.username }}
# password:
# valueFrom:
# secretKeyRef:
# key: password
# name: zinc-files-secrets
# indexes:
# - name: zinc-files
# namespace: user-system-{{ .Values.bfl.username }}
# key: mappings
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: zinc-files-redis
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: files
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis_password
name: zinc-files-secrets
namespace: zinc-files
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: files-frontend-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: files-frontend
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: files_frontend_nats_password
name: files-frontend-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-files-frontend
---
apiVersion: v1
data:
envoy.yaml: |
admin:
access_log_path: "/dev/stdout"
address:
socket_address:
address: 0.0.0.0
port_value: 15000
static_resources:
listeners:
- name: listener_0
address:
socket_address:
address: 0.0.0.0
port_value: 15003
listener_filters:
- name: envoy.filters.listener.original_dst
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: desktop_http
upgrade_configs:
- upgrade_type: websocket
- upgrade_type: tailscale-control-protocol
skip_xff_append: false
max_request_headers_kb: 500
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/upload"
route:
cluster: upload_original_dst
timeout: 1800s
idle_timeout: 1800s
- match:
prefix: "/"
route:
cluster: original_dst
timeout: 1800s
idle_timeout: 1800s
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.ext_authz
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
http_service:
path_prefix: '/api/verify/'
server_uri:
uri: authelia-backend.os-system:9091
cluster: authelia
timeout: 2s
authorization_request:
allowed_headers:
patterns:
- exact: accept
- exact: cookie
- exact: proxy-authorization
- prefix: x-unauth-
- exact: x-authorization
- exact: x-bfl-user
- exact: x-real-ip
- exact: terminus-nonce
headers_to_add:
- key: X-Forwarded-Method
value: '%REQ(:METHOD)%'
- key: X-Forwarded-Proto
value: '%REQ(:SCHEME)%'
- key: X-Forwarded-Host
value: '%REQ(:AUTHORITY)%'
- key: X-Forwarded-Uri
value: '%REQ(:PATH)%'
- key: X-Forwarded-For
value: '%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%'
authorization_response:
allowed_upstream_headers:
patterns:
- exact: authorization
- exact: proxy-authorization
- prefix: remote-
- prefix: authelia-
allowed_client_headers:
patterns:
- exact: set-cookie
allowed_client_headers_on_success:
patterns:
- exact: set-cookie
failure_mode_allow: false
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
- name: listener_image
address:
socket_address:
address: 127.0.0.1
port_value: 15080
filter_chains:
- filters:
- name: envoy.filters.network.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
stat_prefix: tapr_http
http_protocol_options:
accept_http_10: true
upgrade_configs:
- upgrade_type: websocket
skip_xff_append: false
codec_type: AUTO
route_config:
name: local_route
virtual_hosts:
- name: service
domains: ["*"]
routes:
- match:
prefix: "/images/upload"
route:
cluster: images
http_filters:
- name: envoy.filters.http.router
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
clusters:
- name: original_dst
connect_timeout: 120s
type: ORIGINAL_DST
lb_policy: CLUSTER_PROVIDED
common_http_protocol_options:
idle_timeout: 10s
- name: upload_original_dst
connect_timeout: 5000s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: upload_original_dst
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: files-service.os-system
port_value: 80
- name: authelia
connect_timeout: 2s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: authelia
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: authelia-backend.os-system
port_value: 9091
- name: images
connect_timeout: 5s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
dns_refresh_rate: 600s
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: images
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: tapr-images-svc.user-system-{{ .Values.bfl.username }}
port_value: 8080
kind: ConfigMap
metadata:
name: sidecar-upload-configs
namespace: {{ .Release.Namespace }}


@@ -1,48 +0,0 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
nodeport_ingress_https: 30082
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
docs:
nodeport: 30881
desktop:
nodeport: 30180
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
agent:
appKey: '${ks[0]}'
appSecret: test
files:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""


@@ -7,6 +7,24 @@
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $market_backend_nats_secret := (lookup "v1" "Secret" .Release.Namespace "market-backend-nats-secret") -}}
{{- $nats_password := "" -}}
{{ if $market_backend_nats_secret -}}
{{ $nats_password = (index $market_backend_nats_secret "data" "nats_password") }}
{{ else -}}
{{ $nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: market-backend-nats-secret
namespace: {{ .Release.Namespace }}
type: Opaque
data:
nats_password: {{ $nats_password }}
---
apiVersion: v1
kind: Secret
metadata:
@@ -24,14 +42,7 @@ metadata:
namespace: {{ .Release.Namespace }}
labels:
app: appstore
applications.app.bytetrade.io/name: market
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/appstore/icon.png
applications.app.bytetrade.io/title: Market
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"appstore-service", "host":"appstore-service", "port":80,"title":"Market","windowPushState":true}]'
spec:
replicas: 1
selector:
@@ -46,8 +57,6 @@ spec:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "appstore-backend"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/opt/app/market"
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "appstore"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
@@ -88,32 +97,8 @@ spec:
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: nginx-init
image: beclab/market-frontend:v0.3.12
imagePullPolicy: IfNotPresent
volumeMounts:
- name: app
mountPath: /cp_app
- name: nginx-confd
mountPath: /confd
command:
- sh
- -c
- |
cp -rf /app/* /cp_app/. && cp -rf /etc/nginx/conf.d/* /confd/.
fieldPath: status.podIP
containers:
- name: appstore
image: beclab/docker-nginx-headers-more:ubuntu-v0.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
volumeMounts:
- name: app
mountPath: /app
- name: nginx-confd
mountPath: /etc/nginx/conf.d
- name: appstore-backend
image: beclab/market-backend:v0.3.12
imagePullPolicy: IfNotPresent
@@ -151,7 +136,19 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: market-backend-{{ .Values.bfl.username}}
- name: NATS_PASSWORD
valueFrom:
secretKeyRef:
name: market-backend-nats-secret
key: nats_password
- name: NATS_SUBJECT_USER_APPLICATION
value: terminus.user.application.{{ .Values.bfl.username}}
volumeMounts:
- name: opt-data
mountPath: /opt/app/data
@@ -234,10 +231,6 @@ spec:
app: appstore
type: ClusterIP
ports:
- protocol: TCP
name: appstore
port: 80
targetPort: 80
- protocol: TCP
name: appstore-backend
port: 81
@@ -306,4 +299,55 @@ spec:
secretKeyRef:
key: redis-passwords
name: market-secrets
namespace: market
namespace: market
---
apiVersion: v1
kind: Service
metadata:
name: appstore-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: appstore
ports:
- name: "appstore-backend"
protocol: TCP
port: 81
targetPort: 81
- name: "appstore-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-backend-nats
namespace: {{ .Release.Namespace }}
spec:
app: market-backend
appNamespace: user
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nats_password
name: market-backend-nats-secret
refs:
- appName: user-service
appNamespace: user
subjects:
- name: "application.*"
perm:
- pub
- sub
- appName: user-service
appNamespace: user
subjects:
- name: "market.*"
perm:
- pub
- sub
user: market-backend-{{ .Values.bfl.username}}

File diff suppressed because it is too large.


@@ -1,26 +0,0 @@
apiVersion: v2
name: vault
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,302 +0,0 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $vault_nats_secret := (lookup "v1" "Secret" $namespace "vault-nats-secrets") -}}
{{- $vault_nats_password := "" -}}
{{ if $vault_nats_secret -}}
{{ $vault_nats_password = (index $vault_nats_secret "data" "vault_nats_password") }}
{{ else -}}
{{ $vault_nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vault-deployment
namespace: {{ .Release.Namespace }}
labels:
app: vault
applications.app.bytetrade.io/name: vault
applications.app.bytetrade.io/owner: '{{ .Values.bfl.username }}'
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/icon: https://file.bttcdn.com/appstore/vault/icon.png
applications.app.bytetrade.io/title: Vault
applications.app.bytetrade.io/version: '0.0.1'
applications.app.bytetrade.io/entrances: '[{"name":"vault", "host":"vault-service", "port":80,"title":"Vault","windowPushState":true}]'
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: vault
template:
metadata:
labels:
app: vault
io.bytetrade.app: "true"
spec:
initContainers:
- args:
- -it
- authelia-backend.os-system:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- args:
- -it
- nats.user-system-{{ .Values.bfl.username }}:4222
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-nats
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: vault-frontend
image: beclab/vault-frontend:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- name: notification-server
image: beclab/vault-notification:v1.3.55
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
env:
{{- range $key, $val := .Values.terminusGlobalEnvs }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.vault.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.vault.appKey }}
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: user-system-{{ .Values.bfl.username }}-vault
- name: NATS_PASSWORD
value: {{ $vault_nats_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.os-system.files-notify
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.3'
imagePullPolicy: IfNotPresent
command:
- /ws-gateway
env:
- name: WS_PORT
value: '3010'
- name: WS_URL
value: /websocket/message
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumes:
# - name: vault-data
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/vault/data
# - name: vault-sign
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/vault/sign
# - name: vault-attach
# hostPath:
# type: DirectoryOrCreate
# path: {{ .Values.userspace.appCache}}/vault/attachments
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
---
apiVersion: v1
kind: Service
metadata:
name: vault-service
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: vault
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: vault-server
namespace: {{ .Release.Namespace }}
spec:
type: ExternalName
externalName: vault-server.os-system.svc.cluster.local
ports:
- protocol: TCP
port: 3000
targetPort: 3000
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: vault-notification
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: notification
deployment: vault
description: send notification to desktop client
endpoint: vault-service.{{ .Release.Namespace }}
group: service.vault
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: Create
uri: /notification/create
- name: Query
uri: /notification/query
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: vault
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: vault
appid: vault
key: {{ .Values.os.vault.appKey }}
secret: {{ .Values.os.vault.appSecret }}
permissions:
- dataType: token
group: service.notification
ops:
- Create
version: v1
status:
state: active
---
apiVersion: v1
kind: Secret
metadata:
name: vault-nats-secrets
namespace: user-system-{{ .Values.bfl.username }}
data:
vault_nats_password: {{ $vault_nats_password }}
type: Opaque
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: vault-nat
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: vault
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: vault_nats_password
name: vault-nats-secrets
refs:
- appName: files-server
appNamespace: os-system
subjects:
- name: files-notify
perm:
- pub
- sub
user: user-system-{{ .Values.bfl.username }}-vault
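
The template above uses Helm's `lookup`-or-generate idiom so the NATS password is created once and then reused on every upgrade. A minimal shell sketch of the same idea (the `kubectl` access, secret name, and namespace variable are assumptions, not part of the chart):

```shell
# Reuse the existing secret value if present, otherwise generate a new one.
# Shell sketch of the Helm `lookup` / `randAlphaNum 16 | b64enc` pattern above.
ns="user-system-${USERNAME:-test}"
existing=$(kubectl -n "$ns" get secret vault-nats-secrets \
  -o jsonpath='{.data.vault_nats_password}' 2>/dev/null || true)
if [ -n "$existing" ]; then
  # Secret already exists: decode and keep the stored password
  password=$(printf '%s' "$existing" | base64 -d)
else
  # No secret yet: build 16 alphanumeric characters, like randAlphaNum 16
  chars='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
  password=''
  for _ in $(seq 1 16); do
    password="$password${chars:$((RANDOM % ${#chars})):1}"
  done
fi
echo "password length: ${#password}"
```

Without the `lookup`, every `helm upgrade` would re-run `randAlphaNum` and rotate the password out from under the running NATS clients.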


@@ -1,42 +0,0 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
nodeport_ingress_https: 30082
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
docs:
nodeport: 30881
desktop:
nodeport: 30180
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
kubesphere:
redis_password: ""


@@ -48,7 +48,7 @@ if (-Not (Test-Path $CLI_PROGRAM_PATH)) {
New-Item -Path $CLI_PROGRAM_PATH -ItemType Directory
}
$CLI_VERSION = "0.2.35"
$CLI_VERSION = "$version"
$CLI_FILE = "olares-cli-v{0}_windows_{1}.tar.gz" -f $CLI_VERSION, $arch
$CLI_URL = "{0}/{1}" -f $downloadUrl, $CLI_FILE
$CLI_PATH = "{0}{1}" -f $CLI_PROGRAM_PATH, $CLI_FILE


@@ -10,7 +10,7 @@ function command_exists() {
if [[ x"$VERSION" == x"" ]]; then
if [[ "$LOCAL_RELEASE" == "1" ]]; then
ts=$(date +%Y%m%d%H%M%S)
export VERSION="0.0.0-local-dev-$ts"
export VERSION="1.12.0-$ts"
echo "will build and use a local release of Olares with version: $VERSION"
echo ""
else
@@ -74,53 +74,60 @@ if [ -z ${cdn_url} ]; then
cdn_url="https://dc3p1870nn3cj.cloudfront.net"
fi
CLI_VERSION="0.2.35"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
CLI_FILE="olares-cli-v${VERSION}_linux_${ARCH}.tar.gz"
if [[ x"$os_type" == x"Darwin" ]]; then
CLI_FILE="olares-cli-v${CLI_VERSION}_darwin_${ARCH}.tar.gz"
CLI_FILE="olares-cli-v${VERSION}_darwin_${ARCH}.tar.gz"
fi
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$CLI_VERSION" ]]; then
if [[ "$LOCAL_RELEASE" == "1" ]]; then
if ! command_exists olares-cli ; then
echo "error: LOCAL_RELEASE specified but olares-cli not found"
exit 1
fi
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli already installed and is the expected version"
echo ""
else
if [[ ! -f ${CLI_FILE} ]]; then
CLI_URL="${cdn_url}/${CLI_FILE}"
echo "downloading Olares installer from ${CLI_URL} ..."
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$VERSION" ]]; then
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli already installed and is the expected version"
echo ""
else
if [[ ! -f ${CLI_FILE} ]]; then
CLI_URL="${cdn_url}/${CLI_FILE}"
curl -Lo ${CLI_FILE} ${CLI_URL}
echo "downloading Olares installer from ${CLI_URL} ..."
echo ""
curl -Lo ${CLI_FILE} ${CLI_URL}
if [[ $? -ne 0 ]]; then
echo "error: failed to download Olares installer"
exit 1
else
echo "Olares installer ${VERSION} download complete!"
echo ""
fi
fi
INSTALL_OLARES_CLI="/usr/local/bin/olares-cli"
echo "unpacking Olares installer to $INSTALL_OLARES_CLI..."
echo ""
tar -zxf ${CLI_FILE} olares-cli && chmod +x olares-cli
if [[ x"$os_type" == x"Darwin" ]]; then
if [ ! -f "/usr/local/Cellar/olares" ]; then
current_user=$(whoami)
$sh_c "sudo mkdir -p /usr/local/Cellar/olares && sudo chown ${current_user}:staff /usr/local/Cellar/olares"
fi
$sh_c "mv olares-cli /usr/local/Cellar/olares/olares-cli && \
sudo rm -rf /usr/local/bin/olares-cli && \
sudo ln -s /usr/local/Cellar/olares/olares-cli $INSTALL_OLARES_CLI"
else
$sh_c "mv olares-cli $INSTALL_OLARES_CLI"
fi
if [[ $? -ne 0 ]]; then
echo "error: failed to download Olares installer"
echo "error: failed to unpack Olares installer"
exit 1
else
echo "Olares installer ${CLI_VERSION} download complete!"
echo ""
fi
fi
INSTALL_OLARES_CLI="/usr/local/bin/olares-cli"
echo "unpacking Olares installer to $INSTALL_OLARES_CLI..."
echo ""
tar -zxf ${CLI_FILE} olares-cli && chmod +x olares-cli
if [[ x"$os_type" == x"Darwin" ]]; then
if [ ! -f "/usr/local/Cellar/olares" ]; then
current_user=$(whoami)
$sh_c "sudo mkdir -p /usr/local/Cellar/olares && sudo chown ${current_user}:staff /usr/local/Cellar/olares"
fi
$sh_c "mv olares-cli /usr/local/Cellar/olares/olares-cli && \
sudo rm -rf /usr/local/bin/olares-cli && \
sudo ln -s /usr/local/Cellar/olares/olares-cli $INSTALL_OLARES_CLI"
else
$sh_c "mv olares-cli $INSTALL_OLARES_CLI"
fi
if [[ $? -ne 0 ]]; then
echo "error: failed to unpack Olares installer"
exit 1
fi
fi
PARAMS="--version $VERSION --base-dir $BASE_DIR"
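
The branch above skips the download when `olares-cli -v` already reports the wanted version. That check can be isolated as follows (a sketch; the `olares-cli version X.Y.Z` output shape is inferred from the `awk '{print $3}'` usage above):

```shell
# Return success if the third field of the version output equals the wanted
# version, mirroring: olares-cli -v | awk '{print $3}'
cli_matches() {
  local wanted="$1" out="$2"
  [ "$(printf '%s\n' "$out" | awk '{print $3}')" = "$wanted" ]
}

cli_matches "1.12.0" "olares-cli version 1.12.0" && echo "skip download"
```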


@@ -145,6 +145,14 @@ if ! command_exists tar; then
exit 1
fi
export VERSION="#__VERSION__"
if [[ "x${VERSION}" == "x" || "x${VERSION:3}" == "xVERSION__" ]]; then
echo "error: Olares version is unspecified, please set the VERSION env var and rerun this script."
echo "for example: VERSION=1.12.0-20241124 bash $0"
exit 1
fi
BASE_DIR="$HOME/.olares"
if [ ! -d $BASE_DIR ]; then
mkdir -p $BASE_DIR
@@ -157,10 +165,9 @@ fi
set_master_host_ssh_options
CLI_VERSION="0.2.35"
CLI_FILE="olares-cli-v${CLI_VERSION}_linux_${ARCH}.tar.gz"
CLI_FILE="olares-cli-v${VERSION}_linux_${ARCH}.tar.gz"
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$CLI_VERSION" ]]; then
if command_exists olares-cli && [[ "$(olares-cli -v | awk '{print $3}')" == "$VERSION" ]]; then
INSTALL_OLARES_CLI=$(which olares-cli)
echo "olares-cli already installed and is the expected version"
echo ""
@@ -177,7 +184,7 @@ else
echo "error: failed to download Olares installer"
exit 1
else
echo "Olares installer ${CLI_VERSION} download complete!"
echo "Olares installer ${VERSION} download complete!"
echo ""
fi
fi
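
The `${VERSION:3}` comparison in the guard above works because slicing three characters off an unsubstituted `#__VERSION__` placeholder leaves the literal `VERSION__`. A standalone sketch (bash, since POSIX sh lacks `${var:offset}`):

```shell
# Detect an unsubstituted "#__VERSION__" placeholder, as the guard added to
# joincluster.sh above does.
version_is_set() {
  local v="$1"
  # empty, or still carrying the placeholder tail after dropping "#__"
  [ -n "$v" ] && [ "${v:3}" != "VERSION__" ]
}

if version_is_set "#__VERSION__"; then echo "ok"; else echo "unset"; fi
```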


@@ -44,6 +44,7 @@ while read line; do
echo "$filename,$path,$deps,$url_amd64,$checksum_amd64,$url_arm64,$checksum_arm64,$fileid" >> $manifest_file
done < components
sed -i "s/#__VERSION__/${VERSION}/g" $manifest_file
path="images"
for deps in "images.mf"; do


@@ -3,7 +3,7 @@
BASE_DIR=$(dirname $(realpath -s $0))
rm -rf ${BASE_DIR}/../.dist
DIST_PATH="${BASE_DIR}/../.dist/install-wizard"
VERSION=$1
export VERSION=$1
DIST_PATH=${DIST_PATH} bash ${BASE_DIR}/package.sh
@@ -36,6 +36,7 @@ if [ ! -z $VERSION ]; then
sh -c "$SED 's/#__VERSION__/${VERSION}/' wizard/config/settings/templates/terminus_cr.yaml"
sh -c "$SED 's/#__VERSION__/${VERSION}/' install.sh"
sh -c "$SED 's/#__VERSION__/${VERSION}/' install.ps1"
sh -c "$SED 's/#__VERSION__/${VERSION}/' joincluster.sh"
VERSION="v${VERSION}"
else
VERSION="debug"


@@ -75,3 +75,5 @@ find $BASE_DIR/../ -type f -name Olares.yaml | while read f; do
unset bins
done
sed -i "s/#__VERSION__/${VERSION}/g" ${manifest}

cli/.goreleaser.yaml Normal file

@@ -0,0 +1,41 @@
project_name: olares-cli
builds:
- env:
- CGO_ENABLED=0
binary: olares-cli
main: ./cmd/main.go
goos:
- linux
- darwin
- windows
goarch:
- amd64
- arm
- arm64
goarm:
- 7
ignore:
- goos: linux
goarch: arm64
- goos: darwin
goarch: arm
- goos: windows
goarch: arm
ldflags:
- -s
- -w
- -X bytetrade.io/web3os/installer/version.VERSION={{ .Version }}
dist: ./output
archives:
- id: olares-cli
name_template: "{{ .ProjectName }}-v{{ .Version }}_{{ .Os }}_{{ .Arch }}"
replacements:
linux: linux
amd64: amd64
arm: arm64
checksum:
name_template: "checksums.txt"
release:
disable: true
changelog:
skip: true

cli/README.md Executable file

@@ -0,0 +1 @@
# installer


@@ -0,0 +1,43 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
type Addon struct {
Name string `yaml:"name" json:"name,omitempty"`
Namespace string `yaml:"namespace" json:"namespace,omitempty"`
Sources Sources `yaml:"sources" json:"sources,omitempty"`
Retries int `yaml:"retries" json:"retries,omitempty"`
Delay int `yaml:"delay" json:"delay,omitempty"`
}
type Sources struct {
Chart Chart `yaml:"chart" json:"chart,omitempty"`
Yaml Yaml `yaml:"yaml" json:"yaml,omitempty"`
}
type Chart struct {
Name string `yaml:"name" json:"name,omitempty"`
Repo string `yaml:"repo" json:"repo,omitempty"`
Path string `yaml:"path" json:"path,omitempty"`
Version string `yaml:"version" json:"version,omitempty"`
ValuesFile string `yaml:"valuesFile" json:"valuesFile,omitempty"`
Values []string `yaml:"values" json:"values,omitempty"`
}
type Yaml struct {
Path []string `yaml:"path" json:"path,omitempty"`
}


@@ -0,0 +1,403 @@
/*
Copyright 2021.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"fmt"
"regexp"
"strconv"
"strings"
"bytetrade.io/web3os/installer/pkg/core/logger"
"bytetrade.io/web3os/installer/pkg/core/util"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// ClusterSpec defines the desired state of Cluster
type ClusterSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Foo is an example field of Cluster. Edit Cluster_types.go to remove/update
Hosts []HostCfg `yaml:"hosts" json:"hosts,omitempty"`
RoleGroups RoleGroups `yaml:"roleGroups" json:"roleGroups,omitempty"`
ControlPlaneEndpoint ControlPlaneEndpoint `yaml:"controlPlaneEndpoint" json:"controlPlaneEndpoint,omitempty"`
Kubernetes Kubernetes `yaml:"kubernetes" json:"kubernetes,omitempty"`
Network NetworkConfig `yaml:"network" json:"network,omitempty"`
Registry RegistryConfig `yaml:"registry" json:"registry,omitempty"`
Addons []Addon `yaml:"addons" json:"addons,omitempty"`
KubeSphere KubeSphere `json:"kubesphere,omitempty"`
}
// ClusterStatus defines the observed state of Cluster
type ClusterStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
JobInfo JobInfo `json:"jobInfo,omitempty"`
Version string `json:"version,omitempty"`
NetworkPlugin string `json:"networkPlugin,omitempty"`
NodesCount int `json:"nodesCount,omitempty"`
EtcdCount int `json:"etcdCount,omitempty"`
MasterCount int `json:"masterCount,omitempty"`
WorkerCount int `json:"workerCount,omitempty"`
Nodes []NodeStatus `json:"nodes,omitempty"`
Conditions []Condition `json:"Conditions,omitempty"`
}
// JobInfo defines the job information to be used to create a cluster or add a node.
type JobInfo struct {
Namespace string `json:"namespace,omitempty"`
Name string `json:"name,omitempty"`
Pods []PodInfo `json:"pods,omitempty"`
}
// PodInfo defines the pod information to be used to create a cluster or add a node.
type PodInfo struct {
Name string `json:"name,omitempty"`
Containers []ContainerInfo `json:"containers,omitempty"`
}
// ContainerInfo defines the container information to be used to create a cluster or add a node.
type ContainerInfo struct {
Name string `json:"name,omitempty"`
}
// NodeStatus defines the status information of the nodes in the cluster.
type NodeStatus struct {
InternalIP string `json:"internalIP,omitempty"`
Hostname string `json:"hostname,omitempty"`
Roles map[string]bool `json:"roles,omitempty"`
}
// Condition defines the process information.
type Condition struct {
Step string `json:"step,omitempty"`
StartTime metav1.Time `json:"startTime,omitempty"`
EndTime metav1.Time `json:"endTime,omitempty"`
Status bool `json:"status,omitempty"`
}
// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// Cluster is the Schema for the clusters API
// +kubebuilder:resource:path=clusters,scope=Cluster
type Cluster struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ClusterSpec `json:"spec,omitempty"`
Status ClusterStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// ClusterList contains a list of Cluster
type ClusterList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Cluster `json:"items"`
}
func init() {
SchemeBuilder.Register(&Cluster{}, &ClusterList{})
}
// HostCfg defines host information for cluster.
type HostCfg struct {
Name string `yaml:"name,omitempty" json:"name,omitempty"`
Address string `yaml:"address,omitempty" json:"address,omitempty"`
InternalAddress string `yaml:"internalAddress,omitempty" json:"internalAddress,omitempty"`
Port int `yaml:"port,omitempty" json:"port,omitempty"`
User string `yaml:"user,omitempty" json:"user,omitempty"`
Password string `yaml:"password,omitempty" json:"password,omitempty"`
PrivateKey string `yaml:"privateKey,omitempty" json:"privateKey,omitempty"`
PrivateKeyPath string `yaml:"privateKeyPath,omitempty" json:"privateKeyPath,omitempty"`
Arch string `yaml:"arch,omitempty" json:"arch,omitempty"`
Labels map[string]string `yaml:"labels,omitempty" json:"labels,omitempty"`
ID string `yaml:"id,omitempty" json:"id,omitempty"`
Index int `json:"-"`
IsEtcd bool `json:"-"`
IsMaster bool `json:"-"`
IsWorker bool `json:"-"`
EtcdExist bool `json:"-"`
EtcdName string `json:"-"`
}
// RoleGroups defines the grouping of roles for hosts (etcd / master / worker).
type RoleGroups struct {
Etcd []string `yaml:"etcd" json:"etcd,omitempty"`
Master []string `yaml:"master" json:"master,omitempty"`
Worker []string `yaml:"worker" json:"worker,omitempty"`
}
// HostGroups defines the grouping of hosts for cluster (all / etcd / master / worker / k8s).
type HostGroups struct {
All []HostCfg
Etcd []HostCfg
Master []HostCfg
Worker []HostCfg
K8s []HostCfg
}
// ControlPlaneEndpoint defines the control plane endpoint information for cluster.
type ControlPlaneEndpoint struct {
InternalLoadbalancer string `yaml:"internalLoadbalancer" json:"internalLoadbalancer,omitempty"`
Domain string `yaml:"domain" json:"domain,omitempty"`
Address string `yaml:"address" json:"address,omitempty"`
Port int `yaml:"port" json:"port,omitempty"`
}
// RegistryConfig defines the configuration information of the image's repository.
type RegistryConfig struct {
RegistryMirrors []string `yaml:"registryMirrors" json:"registryMirrors,omitempty"`
InsecureRegistries []string `yaml:"insecureRegistries" json:"insecureRegistries,omitempty"`
PrivateRegistry string `yaml:"privateRegistry" json:"privateRegistry,omitempty"`
}
// KubeSphere defines the configuration information of the KubeSphere.
type KubeSphere struct {
Enabled bool `json:"enabled,omitempty"`
Version string `json:"version,omitempty"`
Configurations string `json:"configurations,omitempty"`
}
// ExternalEtcd defines configuration information of external etcd.
type ExternalEtcd struct {
Endpoints []string
CaFile string
CertFile string
KeyFile string
}
// Copy is used to create a copy for Runtime.
func (h *HostCfg) Copy() *HostCfg {
host := *h
return &host
}
// GenerateCertSANs is used to generate cert sans for cluster.
func (cfg *ClusterSpec) GenerateCertSANs() []string {
clusterSvc := fmt.Sprintf("kubernetes.default.svc.%s", cfg.Kubernetes.ClusterName)
defaultCertSANs := []string{"kubernetes", "kubernetes.default", "kubernetes.default.svc", clusterSvc, "localhost", "127.0.0.1"}
extraCertSANs := []string{}
extraCertSANs = append(extraCertSANs, cfg.ControlPlaneEndpoint.Domain)
extraCertSANs = append(extraCertSANs, cfg.ControlPlaneEndpoint.Address)
for _, host := range cfg.Hosts {
extraCertSANs = append(extraCertSANs, host.Name)
extraCertSANs = append(extraCertSANs, fmt.Sprintf("%s.%s", host.Name, cfg.Kubernetes.ClusterName))
if host.Address != cfg.ControlPlaneEndpoint.Address {
extraCertSANs = append(extraCertSANs, host.Address)
}
if host.InternalAddress != host.Address && host.InternalAddress != cfg.ControlPlaneEndpoint.Address {
extraCertSANs = append(extraCertSANs, host.InternalAddress)
}
}
extraCertSANs = append(extraCertSANs, util.ParseIp(cfg.Network.KubeServiceCIDR)[0])
defaultCertSANs = append(defaultCertSANs, extraCertSANs...)
if cfg.Kubernetes.ApiserverCertExtraSans != nil {
defaultCertSANs = append(defaultCertSANs, cfg.Kubernetes.ApiserverCertExtraSans...)
}
return defaultCertSANs
}
// GroupHosts is used to group hosts according to the configuration file.
func (cfg *ClusterSpec) GroupHosts() (*HostGroups, error) {
clusterHostsGroups := HostGroups{}
hostList := map[string]string{}
for _, host := range cfg.Hosts {
hostList[host.Name] = host.Name
}
etcdGroup, masterGroup, workerGroup, err := cfg.ParseRolesList(hostList)
if err != nil {
return nil, err
}
for index, host := range cfg.Hosts {
host.Index = index
if len(etcdGroup) > 0 {
for _, hostName := range etcdGroup {
if host.Name == hostName {
host.IsEtcd = true
break
}
}
}
if len(masterGroup) > 0 {
for _, hostName := range masterGroup {
if host.Name == hostName {
host.IsMaster = true
break
}
}
}
if len(workerGroup) > 0 {
for _, hostName := range workerGroup {
if hostName != "" && host.Name == hostName {
host.IsWorker = true
break
}
}
}
if host.IsEtcd {
clusterHostsGroups.Etcd = append(clusterHostsGroups.Etcd, host)
}
if host.IsMaster {
clusterHostsGroups.Master = append(clusterHostsGroups.Master, host)
}
if host.IsWorker {
clusterHostsGroups.Worker = append(clusterHostsGroups.Worker, host)
}
if host.IsMaster || host.IsWorker {
clusterHostsGroups.K8s = append(clusterHostsGroups.K8s, host)
}
clusterHostsGroups.All = append(clusterHostsGroups.All, host)
}
	// Validate the parameters under roleGroups
if len(masterGroup) == 0 {
logger.Fatal(errors.New("The number of master cannot be 0"))
}
if len(etcdGroup) == 0 {
logger.Fatal(errors.New("The number of etcd cannot be 0"))
}
if len(masterGroup) != len(clusterHostsGroups.Master) {
return nil, errors.New("Incorrect nodeName under roleGroups/master in the configuration file")
}
if len(etcdGroup) != len(clusterHostsGroups.Etcd) {
return nil, errors.New("Incorrect nodeName under roleGroups/etcd in the configuration file")
}
if len(workerGroup) != len(clusterHostsGroups.Worker) {
return nil, errors.New("Incorrect nodeName under roleGroups/worker in the configuration file")
}
return &clusterHostsGroups, nil
}
// ClusterIP is used to get the kube-apiserver service address inside the cluster.
func (cfg *ClusterSpec) ClusterIP() string {
return util.ParseIp(cfg.Network.KubeServiceCIDR)[0]
}
// CorednsClusterIP is used to get the coredns service address inside the cluster.
func (cfg *ClusterSpec) CorednsClusterIP() string {
return util.ParseIp(cfg.Network.KubeServiceCIDR)[2]
}
// ClusterDNS is used to get the dns server address inside the cluster.
func (cfg *ClusterSpec) ClusterDNS() string {
if cfg.Kubernetes.EnableNodelocaldns() {
return "169.254.25.10"
}
return cfg.CorednsClusterIP()
}
// ParseRolesList is used to parse the host grouping list.
func (cfg *ClusterSpec) ParseRolesList(hostList map[string]string) ([]string, []string, []string, error) {
etcdGroupList := []string{}
masterGroupList := []string{}
workerGroupList := []string{}
for _, host := range cfg.RoleGroups.Etcd {
if strings.Contains(host, "[") && strings.Contains(host, "]") && strings.Contains(host, ":") {
etcdGroupList = append(etcdGroupList, getHostsRange(host, hostList, "etcd")...)
} else {
if err := hostVerify(hostList, host, "etcd"); err != nil {
logger.Fatal(err)
}
etcdGroupList = append(etcdGroupList, host)
}
}
for _, host := range cfg.RoleGroups.Master {
if strings.Contains(host, "[") && strings.Contains(host, "]") && strings.Contains(host, ":") {
masterGroupList = append(masterGroupList, getHostsRange(host, hostList, "master")...)
} else {
if err := hostVerify(hostList, host, "master"); err != nil {
logger.Fatal(err)
}
masterGroupList = append(masterGroupList, host)
}
}
for _, host := range cfg.RoleGroups.Worker {
if strings.Contains(host, "[") && strings.Contains(host, "]") && strings.Contains(host, ":") {
workerGroupList = append(workerGroupList, getHostsRange(host, hostList, "worker")...)
} else {
if err := hostVerify(hostList, host, "worker"); err != nil {
logger.Fatal(err)
}
workerGroupList = append(workerGroupList, host)
}
}
return etcdGroupList, masterGroupList, workerGroupList, nil
}
func getHostsRange(rangeStr string, hostList map[string]string, group string) []string {
hostRangeList := []string{}
r := regexp.MustCompile(`\[(\d+)\:(\d+)\]`)
nameSuffix := r.FindStringSubmatch(rangeStr)
namePrefix := strings.Split(rangeStr, nameSuffix[0])[0]
nameSuffixStart, _ := strconv.Atoi(nameSuffix[1])
nameSuffixEnd, _ := strconv.Atoi(nameSuffix[2])
for i := nameSuffixStart; i <= nameSuffixEnd; i++ {
if err := hostVerify(hostList, fmt.Sprintf("%s%d", namePrefix, i), group); err != nil {
logger.Fatal(err)
}
hostRangeList = append(hostRangeList, fmt.Sprintf("%s%d", namePrefix, i))
}
return hostRangeList
}
func hostVerify(hostList map[string]string, hostName string, group string) error {
if _, ok := hostList[hostName]; !ok {
return fmt.Errorf("[%s] is in [%s] group, but not in hosts list", hostName, group)
}
return nil
}
const (
Haproxy = "haproxy"
)
func (c ControlPlaneEndpoint) IsInternalLBEnabled() bool {
if c.InternalLoadbalancer == Haproxy {
return true
}
return false
}
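
`getHostsRange` above expands `roleGroups` entries like `node[1:3]` into individual host names. A rough shell equivalent (illustrative only; the Go code additionally verifies each expanded name against the hosts list):

```shell
# Expand a host-range spec like "node[1:3]" into node1, node2, node3,
# printing one name per line; a plain name passes through unchanged.
expand_hosts() {
  local spec="$1"
  if [[ "$spec" =~ ^(.+)\[([0-9]+):([0-9]+)\]$ ]]; then
    local prefix="${BASH_REMATCH[1]}"
    local start="${BASH_REMATCH[2]}" end="${BASH_REMATCH[3]}"
    local i
    for ((i = start; i <= end; i++)); do
      printf '%s%d\n' "$prefix" "$i"
    done
  else
    printf '%s\n' "$spec"
  fi
}

expand_hosts "node[1:3]"
```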


@@ -0,0 +1,276 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"fmt"
"os"
"strings"
"bytetrade.io/web3os/installer/pkg/core/util"
)
const (
DefaultPreDir = "kubekey"
DefaultTmpDir = "/tmp/kubekey"
DefaultSSHPort = 22
DefaultLBPort = 6443
DefaultLBDomain = "lb.kubesphere.local"
DefaultNetworkPlugin = "calico"
DefaultPodsCIDR = "10.233.64.0/18"
DefaultServiceCIDR = "10.233.0.0/18"
DefaultKubeImageNamespace = "kubesphere"
DefaultClusterName = "cluster.local"
DefaultArch = "amd64"
DefaultEtcdVersion = "v3.4.13"
DefaultEtcdPort = "2379"
DefaultDockerVersion = "20.10.8"
DefaultCrictlVersion = "v1.22.0"
DefaultKubeVersion = "v1.21.5"
DefaultCalicoVersion = "v3.20.0"
DefaultFlannelVersion = "v0.12.0"
DefaultCniVersion = "v0.9.1"
DefaultCiliumVersion = "v1.8.3"
DefaultKubeovnVersion = "v1.5.0"
DefaultHelmVersion = "v3.6.3"
DefaultMaxPods = 200
DefaultNodeCidrMaskSize = 24
DefaultIPIPMode = "Always"
DefaultVXLANMode = "Never"
DefaultVethMTU = 1440
DefaultBackendMode = "vxlan"
DefaultProxyMode = "ipvs"
DefaultCrioEndpoint = "unix:///var/run/crio/crio.sock"
DefaultContainerdEndpoint = "unix:///run/containerd/containerd.sock"
DefaultIsulaEndpoint = "unix:///var/run/isulad.sock"
Etcd = "etcd"
Master = "master"
Worker = "worker"
K8s = "k8s"
DefaultEtcdBackupDir = "/var/backups/kube_etcd"
DefaultEtcdBackupPeriod = 30
DefaultKeepBackNumber = 5
DefaultEtcdBackupScriptDir = "/usr/local/bin/kube-scripts"
DefaultJoinCIDR = "100.64.0.0/16"
DefaultNetworkType = "geneve"
DefaultVlanID = "100"
DefaultOvnLabel = "node-role.kubernetes.io/control-plane"
DefaultDPDKVersion = "19.11"
DefaultDNSAddress = "114.114.114.114"
Docker = "docker"
Containerd = "containerd"
Crio = "crio"
Isula = "isula"
)
func (cfg *ClusterSpec) SetDefaultClusterSpec(incluster bool) (*ClusterSpec, *HostGroups, error) {
clusterCfg := ClusterSpec{}
clusterCfg.Hosts = SetDefaultHostsCfg(cfg)
clusterCfg.RoleGroups = cfg.RoleGroups
hostGroups, err := clusterCfg.GroupHosts()
if err != nil {
return nil, nil, err
}
clusterCfg.ControlPlaneEndpoint = SetDefaultLBCfg(cfg, hostGroups.Master, incluster)
clusterCfg.Network = SetDefaultNetworkCfg(cfg)
clusterCfg.Kubernetes = SetDefaultClusterCfg(cfg)
clusterCfg.Registry = cfg.Registry
clusterCfg.Addons = cfg.Addons
clusterCfg.KubeSphere = cfg.KubeSphere
if cfg.Kubernetes.ClusterName == "" {
clusterCfg.Kubernetes.ClusterName = DefaultClusterName
}
if cfg.Kubernetes.Version == "" {
clusterCfg.Kubernetes.Version = DefaultKubeVersion
}
if cfg.Kubernetes.MaxPods == 0 {
clusterCfg.Kubernetes.MaxPods = DefaultMaxPods
}
if cfg.Kubernetes.NodeCidrMaskSize == 0 {
clusterCfg.Kubernetes.NodeCidrMaskSize = DefaultNodeCidrMaskSize
}
if cfg.Kubernetes.ProxyMode == "" {
clusterCfg.Kubernetes.ProxyMode = DefaultProxyMode
}
return &clusterCfg, hostGroups, nil
}
func SetDefaultHostsCfg(cfg *ClusterSpec) []HostCfg {
var hostscfg []HostCfg
if len(cfg.Hosts) == 0 {
return nil
}
for _, host := range cfg.Hosts {
if len(host.Address) == 0 && len(host.InternalAddress) > 0 {
host.Address = host.InternalAddress
}
if len(host.InternalAddress) == 0 && len(host.Address) > 0 {
host.InternalAddress = host.Address
}
if host.User == "" {
host.User = "root"
}
if host.Port == 0 {
host.Port = DefaultSSHPort
}
if host.PrivateKey == "" {
if host.Password == "" && host.PrivateKeyPath == "" {
host.PrivateKeyPath = "~/.ssh/id_rsa"
}
if host.PrivateKeyPath != "" && strings.HasPrefix(strings.TrimSpace(host.PrivateKeyPath), "~/") {
homeDir, _ := util.Home()
host.PrivateKeyPath = strings.Replace(host.PrivateKeyPath, "~/", fmt.Sprintf("%s/", homeDir), 1)
}
}
if host.Arch == "" {
host.Arch = DefaultArch
}
hostscfg = append(hostscfg, host)
}
return hostscfg
}
func SetDefaultLBCfg(cfg *ClusterSpec, masterGroup []HostCfg, incluster bool) ControlPlaneEndpoint {
if !incluster {
		// Not an HA environment: the LB address does not need to be set
if len(masterGroup) == 1 && cfg.ControlPlaneEndpoint.Address != "" {
fmt.Println("When the environment is not HA, the LB address does not need to be entered, so delete the corresponding value.")
os.Exit(0)
}
//Check whether LB should be configured
if len(masterGroup) >= 3 && !cfg.ControlPlaneEndpoint.IsInternalLBEnabled() && cfg.ControlPlaneEndpoint.Address == "" {
fmt.Println("When the environment has at least three masters, you must set the value of the LB address or enable the internal loadbalancer.")
os.Exit(0)
}
// Check whether LB address and the internal LB are both enabled
if cfg.ControlPlaneEndpoint.IsInternalLBEnabled() && cfg.ControlPlaneEndpoint.Address != "" {
fmt.Println("You cannot set up the internal load balancer and the LB address at the same time.")
os.Exit(0)
}
}
if cfg.ControlPlaneEndpoint.Address == "" || cfg.ControlPlaneEndpoint.Address == "127.0.0.1" {
cfg.ControlPlaneEndpoint.Address = masterGroup[0].InternalAddress
}
if cfg.ControlPlaneEndpoint.Domain == "" {
cfg.ControlPlaneEndpoint.Domain = DefaultLBDomain
}
if cfg.ControlPlaneEndpoint.Port == 0 {
cfg.ControlPlaneEndpoint.Port = DefaultLBPort
}
defaultLbCfg := cfg.ControlPlaneEndpoint
return defaultLbCfg
}
func SetDefaultNetworkCfg(cfg *ClusterSpec) NetworkConfig {
if cfg.Network.Plugin == "" {
cfg.Network.Plugin = DefaultNetworkPlugin
}
if cfg.Network.KubePodsCIDR == "" {
cfg.Network.KubePodsCIDR = DefaultPodsCIDR
}
if cfg.Network.KubeServiceCIDR == "" {
cfg.Network.KubeServiceCIDR = DefaultServiceCIDR
}
if cfg.Network.Calico.IPIPMode == "" {
cfg.Network.Calico.IPIPMode = DefaultIPIPMode
}
if cfg.Network.Calico.VXLANMode == "" {
cfg.Network.Calico.VXLANMode = DefaultVXLANMode
}
if cfg.Network.Calico.VethMTU == 0 {
cfg.Network.Calico.VethMTU = DefaultVethMTU
}
if cfg.Network.Flannel.BackendMode == "" {
cfg.Network.Flannel.BackendMode = DefaultBackendMode
}
// kube-ovn default config
if cfg.Network.Kubeovn.JoinCIDR == "" {
cfg.Network.Kubeovn.JoinCIDR = DefaultJoinCIDR
}
if cfg.Network.Kubeovn.Label == "" {
cfg.Network.Kubeovn.Label = DefaultOvnLabel
}
if cfg.Network.Kubeovn.VlanID == "" {
cfg.Network.Kubeovn.VlanID = DefaultVlanID
}
if cfg.Network.Kubeovn.NetworkType == "" {
cfg.Network.Kubeovn.NetworkType = DefaultNetworkType
}
if cfg.Network.Kubeovn.PingerExternalAddress == "" {
cfg.Network.Kubeovn.PingerExternalAddress = DefaultDNSAddress
}
if cfg.Network.Kubeovn.DpdkVersion == "" {
cfg.Network.Kubeovn.DpdkVersion = DefaultDPDKVersion
}
defaultNetworkCfg := cfg.Network
return defaultNetworkCfg
}
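`SetDefaultNetworkCfg` only fills in defaults; it does not check that `KubePodsCIDR` and `KubeServiceCIDR` are well-formed or disjoint. A hedged sketch of such a sanity check using only the standard library (the overlap rule is an addition for illustration, not something the source enforces):

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR blocks intersect, e.g. to
// sanity-check KubePodsCIDR against KubeServiceCIDR. Two blocks
// overlap exactly when either network contains the other's base IP.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	ok, _ := cidrsOverlap("10.233.64.0/18", "10.233.0.0/18")
	fmt.Println(ok)
}
```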
func SetDefaultClusterCfg(cfg *ClusterSpec) Kubernetes {
if cfg.Kubernetes.Version == "" {
cfg.Kubernetes.Version = DefaultKubeVersion
} else {
s := strings.Split(cfg.Kubernetes.Version, "-")
if len(s) > 1 {
cfg.Kubernetes.Version = s[0]
cfg.Kubernetes.Type = s[1]
}
}
if cfg.Kubernetes.ClusterName == "" {
cfg.Kubernetes.ClusterName = DefaultClusterName
}
if cfg.Kubernetes.EtcdBackupDir == "" {
cfg.Kubernetes.EtcdBackupDir = DefaultEtcdBackupDir
}
if cfg.Kubernetes.EtcdBackupPeriod == 0 {
cfg.Kubernetes.EtcdBackupPeriod = DefaultEtcdBackupPeriod
}
if cfg.Kubernetes.KeepBackupNumber == 0 {
cfg.Kubernetes.KeepBackupNumber = DefaultKeepBackNumber
}
if cfg.Kubernetes.EtcdBackupScriptDir == "" {
cfg.Kubernetes.EtcdBackupScriptDir = DefaultEtcdBackupScriptDir
}
if cfg.Kubernetes.ContainerManager == "" {
cfg.Kubernetes.ContainerManager = Docker
}
if cfg.Kubernetes.ContainerRuntimeEndpoint == "" {
switch cfg.Kubernetes.ContainerManager {
case Docker:
cfg.Kubernetes.ContainerRuntimeEndpoint = ""
case Crio:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultCrioEndpoint
case Containerd:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultContainerdEndpoint
case Isula:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultIsulaEndpoint
default:
cfg.Kubernetes.ContainerRuntimeEndpoint = ""
}
}
defaultClusterCfg := cfg.Kubernetes
return defaultClusterCfg
}
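`SetDefaultClusterCfg` derives the distribution type from a suffixed version string by splitting on "-": a version such as "v1.22.10-k3s" yields the Kubernetes version plus a type. The same parse in isolation (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// splitVersion mirrors the version handling above: "v1.22.10-k3s"
// yields version "v1.22.10" and type "k3s"; a plain version string
// leaves the type empty.
func splitVersion(v string) (version, typ string) {
	if s := strings.Split(v, "-"); len(s) > 1 {
		return s[0], s[1]
	}
	return v, ""
}

func main() {
	fmt.Println(splitVersion("v1.22.10-k3s"))
}
```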


@@ -0,0 +1,43 @@
/*
Copyright 2021.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the kubekey v1alpha1 API group
// +kubebuilder:object:generate=true
// +groupName=kubekey.kubesphere.io
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = GroupVersion
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "kubekey.kubesphere.io", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
func Resource(resource string) schema.GroupResource {
return GroupVersion.WithResource(resource).GroupResource()
}


@@ -0,0 +1,53 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import "k8s.io/apimachinery/pkg/runtime"
type Kubernetes struct {
Type string `yaml:"type" json:"type,omitempty"`
Version string `yaml:"version" json:"version,omitempty"`
ClusterName string `yaml:"clusterName" json:"clusterName,omitempty"`
MasqueradeAll bool `yaml:"masqueradeAll" json:"masqueradeAll,omitempty"`
MaxPods int `yaml:"maxPods" json:"maxPods,omitempty"`
NodeCidrMaskSize int `yaml:"nodeCidrMaskSize" json:"nodeCidrMaskSize,omitempty"`
ApiserverCertExtraSans []string `yaml:"apiserverCertExtraSans" json:"apiserverCertExtraSans,omitempty"`
ProxyMode string `yaml:"proxyMode" json:"proxyMode,omitempty"`
// +optional
Nodelocaldns *bool `yaml:"nodelocaldns" json:"nodelocaldns,omitempty"`
EtcdBackupDir string `yaml:"etcdBackupDir" json:"etcdBackupDir,omitempty"`
EtcdBackupPeriod int `yaml:"etcdBackupPeriod" json:"etcdBackupPeriod,omitempty"`
KeepBackupNumber int `yaml:"keepBackupNumber" json:"keepBackupNumber,omitempty"`
EtcdBackupScriptDir string `yaml:"etcdBackupScript" json:"etcdBackupScript,omitempty"`
ContainerManager string `yaml:"containerManager" json:"containerManager,omitempty"`
ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint" json:"containerRuntimeEndpoint,omitempty"`
ApiServerArgs []string `yaml:"apiserverArgs" json:"apiserverArgs,omitempty"`
ControllerManagerArgs []string `yaml:"controllerManagerArgs" json:"controllerManagerArgs,omitempty"`
SchedulerArgs []string `yaml:"schedulerArgs" json:"schedulerArgs,omitempty"`
KubeletArgs []string `yaml:"kubeletArgs" json:"kubeletArgs,omitempty"`
KubeProxyArgs []string `yaml:"kubeProxyArgs" json:"kubeProxyArgs,omitempty"`
KubeletConfiguration runtime.RawExtension `yaml:"kubeletConfiguration" json:"kubeletConfiguration,omitempty"`
KubeProxyConfiguration runtime.RawExtension `yaml:"kubeProxyConfiguration" json:"kubeProxyConfiguration,omitempty"`
}
// EnableNodelocaldns is used to determine whether to deploy nodelocaldns.
func (k *Kubernetes) EnableNodelocaldns() bool {
if k.Nodelocaldns == nil {
return true
}
return *k.Nodelocaldns
}
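`Nodelocaldns` is a `*bool` rather than a `bool` so that an omitted field (nil) can default to true, which a plain bool's zero value cannot express. The same tri-state pattern in isolation:

```go
package main

import "fmt"

// enabled mirrors EnableNodelocaldns: nil means "unset", which
// defaults to true; an explicit false survives, unlike with a
// plain bool whose zero value is indistinguishable from "unset".
func enabled(flag *bool) bool {
	if flag == nil {
		return true
	}
	return *flag
}

func main() {
	off := false
	fmt.Println(enabled(nil), enabled(&off))
}
```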


@@ -0,0 +1,53 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
type NetworkConfig struct {
Plugin string `yaml:"plugin" json:"plugin,omitempty"`
KubePodsCIDR string `yaml:"kubePodsCIDR" json:"kubePodsCIDR,omitempty"`
KubeServiceCIDR string `yaml:"kubeServiceCIDR" json:"kubeServiceCIDR,omitempty"`
Calico CalicoCfg `yaml:"calico" json:"calico,omitempty"`
Flannel FlannelCfg `yaml:"flannel" json:"flannel,omitempty"`
Kubeovn KubeovnCfg `yaml:"kubeovn" json:"kubeovn,omitempty"`
}
type CalicoCfg struct {
IPIPMode string `yaml:"ipipMode" json:"ipipMode,omitempty"`
VXLANMode string `yaml:"vxlanMode" json:"vxlanMode,omitempty"`
VethMTU int `yaml:"vethMTU" json:"vethMTU,omitempty"`
}
type FlannelCfg struct {
BackendMode string `yaml:"backendMode" json:"backendMode,omitempty"`
Directrouting bool `yaml:"directRouting" json:"directRouting,omitempty"`
}
type KubeovnCfg struct {
JoinCIDR string `yaml:"joinCIDR" json:"joinCIDR,omitempty"`
NetworkType string `yaml:"networkType" json:"networkType,omitempty"`
Label string `yaml:"label" json:"label,omitempty"`
Iface string `yaml:"iface" json:"iface,omitempty"`
VlanInterfaceName string `yaml:"vlanInterfaceName" json:"vlanInterfaceName,omitempty"`
VlanID string `yaml:"vlanID" json:"vlanID,omitempty"`
DpdkMode bool `yaml:"dpdkMode" json:"dpdkMode,omitempty"`
EnableSSL bool `yaml:"enableSSL" json:"enableSSL,omitempty"`
EnableMirror bool `yaml:"enableMirror" json:"enableMirror,omitempty"`
HwOffload bool `yaml:"hwOffload" json:"hwOffload,omitempty"`
DpdkVersion string `yaml:"dpdkVersion" json:"dpdkVersion,omitempty"`
PingerExternalAddress string `yaml:"pingerExternalAddress" json:"pingerExternalAddress,omitempty"`
PingerExternalDomain string `yaml:"pingerExternalDomain" json:"pingerExternalDomain,omitempty"`
}


@@ -0,0 +1,611 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Addon) DeepCopyInto(out *Addon) {
*out = *in
in.Sources.DeepCopyInto(&out.Sources)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Addon.
func (in *Addon) DeepCopy() *Addon {
if in == nil {
return nil
}
out := new(Addon)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CalicoCfg) DeepCopyInto(out *CalicoCfg) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CalicoCfg.
func (in *CalicoCfg) DeepCopy() *CalicoCfg {
if in == nil {
return nil
}
out := new(CalicoCfg)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Chart) DeepCopyInto(out *Chart) {
*out = *in
if in.Values != nil {
in, out := &in.Values, &out.Values
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Chart.
func (in *Chart) DeepCopy() *Chart {
if in == nil {
return nil
}
out := new(Chart)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Cluster) DeepCopyInto(out *Cluster) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Cluster.
func (in *Cluster) DeepCopy() *Cluster {
if in == nil {
return nil
}
out := new(Cluster)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Cluster) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterList) DeepCopyInto(out *ClusterList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Cluster, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterList.
func (in *ClusterList) DeepCopy() *ClusterList {
if in == nil {
return nil
}
out := new(ClusterList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ClusterList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = *in
if in.Hosts != nil {
in, out := &in.Hosts, &out.Hosts
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.RoleGroups.DeepCopyInto(&out.RoleGroups)
out.ControlPlaneEndpoint = in.ControlPlaneEndpoint
in.Kubernetes.DeepCopyInto(&out.Kubernetes)
out.Network = in.Network
in.Registry.DeepCopyInto(&out.Registry)
if in.Addons != nil {
in, out := &in.Addons, &out.Addons
*out = make([]Addon, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
out.KubeSphere = in.KubeSphere
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterSpec.
func (in *ClusterSpec) DeepCopy() *ClusterSpec {
if in == nil {
return nil
}
out := new(ClusterSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterStatus) DeepCopyInto(out *ClusterStatus) {
*out = *in
in.JobInfo.DeepCopyInto(&out.JobInfo)
if in.Nodes != nil {
in, out := &in.Nodes, &out.Nodes
*out = make([]NodeStatus, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]Condition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterStatus.
func (in *ClusterStatus) DeepCopy() *ClusterStatus {
if in == nil {
return nil
}
out := new(ClusterStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Condition) DeepCopyInto(out *Condition) {
*out = *in
in.StartTime.DeepCopyInto(&out.StartTime)
in.EndTime.DeepCopyInto(&out.EndTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Condition.
func (in *Condition) DeepCopy() *Condition {
if in == nil {
return nil
}
out := new(Condition)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ContainerInfo) DeepCopyInto(out *ContainerInfo) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContainerInfo.
func (in *ContainerInfo) DeepCopy() *ContainerInfo {
if in == nil {
return nil
}
out := new(ContainerInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ControlPlaneEndpoint) DeepCopyInto(out *ControlPlaneEndpoint) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ControlPlaneEndpoint.
func (in *ControlPlaneEndpoint) DeepCopy() *ControlPlaneEndpoint {
if in == nil {
return nil
}
out := new(ControlPlaneEndpoint)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExternalEtcd) DeepCopyInto(out *ExternalEtcd) {
*out = *in
if in.Endpoints != nil {
in, out := &in.Endpoints, &out.Endpoints
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExternalEtcd.
func (in *ExternalEtcd) DeepCopy() *ExternalEtcd {
if in == nil {
return nil
}
out := new(ExternalEtcd)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FlannelCfg) DeepCopyInto(out *FlannelCfg) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FlannelCfg.
func (in *FlannelCfg) DeepCopy() *FlannelCfg {
if in == nil {
return nil
}
out := new(FlannelCfg)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HostCfg) DeepCopyInto(out *HostCfg) {
*out = *in
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostCfg.
func (in *HostCfg) DeepCopy() *HostCfg {
if in == nil {
return nil
}
out := new(HostCfg)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HostGroups) DeepCopyInto(out *HostGroups) {
*out = *in
if in.All != nil {
in, out := &in.All, &out.All
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Etcd != nil {
in, out := &in.Etcd, &out.Etcd
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Master != nil {
in, out := &in.Master, &out.Master
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Worker != nil {
in, out := &in.Worker, &out.Worker
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.K8s != nil {
in, out := &in.K8s, &out.K8s
*out = make([]HostCfg, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostGroups.
func (in *HostGroups) DeepCopy() *HostGroups {
if in == nil {
return nil
}
out := new(HostGroups)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *JobInfo) DeepCopyInto(out *JobInfo) {
*out = *in
if in.Pods != nil {
in, out := &in.Pods, &out.Pods
*out = make([]PodInfo, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new JobInfo.
func (in *JobInfo) DeepCopy() *JobInfo {
if in == nil {
return nil
}
out := new(JobInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeSphere) DeepCopyInto(out *KubeSphere) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeSphere.
func (in *KubeSphere) DeepCopy() *KubeSphere {
if in == nil {
return nil
}
out := new(KubeSphere)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeovnCfg) DeepCopyInto(out *KubeovnCfg) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeovnCfg.
func (in *KubeovnCfg) DeepCopy() *KubeovnCfg {
if in == nil {
return nil
}
out := new(KubeovnCfg)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Kubernetes) DeepCopyInto(out *Kubernetes) {
*out = *in
if in.ApiserverCertExtraSans != nil {
in, out := &in.ApiserverCertExtraSans, &out.ApiserverCertExtraSans
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Nodelocaldns != nil {
in, out := &in.Nodelocaldns, &out.Nodelocaldns
*out = new(bool)
**out = **in
}
if in.ApiServerArgs != nil {
in, out := &in.ApiServerArgs, &out.ApiServerArgs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ControllerManagerArgs != nil {
in, out := &in.ControllerManagerArgs, &out.ControllerManagerArgs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SchedulerArgs != nil {
in, out := &in.SchedulerArgs, &out.SchedulerArgs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.KubeletArgs != nil {
in, out := &in.KubeletArgs, &out.KubeletArgs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.KubeProxyArgs != nil {
in, out := &in.KubeProxyArgs, &out.KubeProxyArgs
*out = make([]string, len(*in))
copy(*out, *in)
}
in.KubeletConfiguration.DeepCopyInto(&out.KubeletConfiguration)
in.KubeProxyConfiguration.DeepCopyInto(&out.KubeProxyConfiguration)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Kubernetes.
func (in *Kubernetes) DeepCopy() *Kubernetes {
if in == nil {
return nil
}
out := new(Kubernetes)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NetworkConfig) DeepCopyInto(out *NetworkConfig) {
*out = *in
out.Calico = in.Calico
out.Flannel = in.Flannel
out.Kubeovn = in.Kubeovn
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkConfig.
func (in *NetworkConfig) DeepCopy() *NetworkConfig {
if in == nil {
return nil
}
out := new(NetworkConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NodeStatus) DeepCopyInto(out *NodeStatus) {
*out = *in
if in.Roles != nil {
in, out := &in.Roles, &out.Roles
*out = make(map[string]bool, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeStatus.
func (in *NodeStatus) DeepCopy() *NodeStatus {
if in == nil {
return nil
}
out := new(NodeStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PodInfo) DeepCopyInto(out *PodInfo) {
*out = *in
if in.Containers != nil {
in, out := &in.Containers, &out.Containers
*out = make([]ContainerInfo, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodInfo.
func (in *PodInfo) DeepCopy() *PodInfo {
if in == nil {
return nil
}
out := new(PodInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RegistryConfig) DeepCopyInto(out *RegistryConfig) {
*out = *in
if in.RegistryMirrors != nil {
in, out := &in.RegistryMirrors, &out.RegistryMirrors
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.InsecureRegistries != nil {
in, out := &in.InsecureRegistries, &out.InsecureRegistries
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RegistryConfig.
func (in *RegistryConfig) DeepCopy() *RegistryConfig {
if in == nil {
return nil
}
out := new(RegistryConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RoleGroups) DeepCopyInto(out *RoleGroups) {
*out = *in
if in.Etcd != nil {
in, out := &in.Etcd, &out.Etcd
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Master != nil {
in, out := &in.Master, &out.Master
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Worker != nil {
in, out := &in.Worker, &out.Worker
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RoleGroups.
func (in *RoleGroups) DeepCopy() *RoleGroups {
if in == nil {
return nil
}
out := new(RoleGroups)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Sources) DeepCopyInto(out *Sources) {
*out = *in
in.Chart.DeepCopyInto(&out.Chart)
in.Yaml.DeepCopyInto(&out.Yaml)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sources.
func (in *Sources) DeepCopy() *Sources {
if in == nil {
return nil
}
out := new(Sources)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Yaml) DeepCopyInto(out *Yaml) {
*out = *in
if in.Path != nil {
in, out := &in.Path, &out.Path
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Yaml.
func (in *Yaml) DeepCopy() *Yaml {
if in == nil {
return nil
}
out := new(Yaml)
in.DeepCopyInto(out)
return out
}


@@ -0,0 +1,43 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
type Addon struct {
Name string `yaml:"name" json:"name,omitempty"`
Namespace string `yaml:"namespace" json:"namespace,omitempty"`
Sources Sources `yaml:"sources" json:"sources,omitempty"`
Retries int `yaml:"retries" json:"retries,omitempty"`
Delay int `yaml:"delay" json:"delay,omitempty"`
}
type Sources struct {
Chart Chart `yaml:"chart" json:"chart,omitempty"`
Yaml Yaml `yaml:"yaml" json:"yaml,omitempty"`
}
type Chart struct {
Name string `yaml:"name" json:"name,omitempty"`
Repo string `yaml:"repo" json:"repo,omitempty"`
Path string `yaml:"path" json:"path,omitempty"`
Version string `yaml:"version" json:"version,omitempty"`
ValuesFile string `yaml:"valuesFile" json:"valuesFile,omitempty"`
Values []string `yaml:"values" json:"values,omitempty"`
}
type Yaml struct {
Path []string `yaml:"path" json:"path,omitempty"`
}


@@ -0,0 +1,364 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"fmt"
"regexp"
"strconv"
"strings"
"bytetrade.io/web3os/installer/pkg/core/connector"
"bytetrade.io/web3os/installer/pkg/core/logger"
"bytetrade.io/web3os/installer/pkg/core/util"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// ClusterSpec defines the desired state of Cluster
type ClusterSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Foo is an example field of Cluster. Edit Cluster_types.go to remove/update
Hosts []HostCfg `yaml:"hosts" json:"hosts,omitempty"`
RoleGroups map[string][]string `yaml:"roleGroups" json:"roleGroups,omitempty"`
ControlPlaneEndpoint ControlPlaneEndpoint `yaml:"controlPlaneEndpoint" json:"controlPlaneEndpoint,omitempty"`
System System `yaml:"system" json:"system,omitempty"`
Etcd EtcdCluster `yaml:"etcd" json:"etcd,omitempty"`
Kubernetes Kubernetes `yaml:"kubernetes" json:"kubernetes,omitempty"`
Network NetworkConfig `yaml:"network" json:"network,omitempty"`
Registry RegistryConfig `yaml:"registry" json:"registry,omitempty"`
Addons []Addon `yaml:"addons" json:"addons,omitempty"`
KubeSphere KubeSphere `json:"kubesphere,omitempty"`
}
// ClusterStatus defines the observed state of Cluster
type ClusterStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
JobInfo JobInfo `json:"jobInfo,omitempty"`
PiplineInfo PiplineInfo `json:"piplineInfo,omitempty"`
Version string `json:"version,omitempty"`
NetworkPlugin string `json:"networkPlugin,omitempty"`
NodesCount int `json:"nodesCount,omitempty"`
EtcdCount int `json:"etcdCount,omitempty"`
MasterCount int `json:"masterCount,omitempty"`
WorkerCount int `json:"workerCount,omitempty"`
Nodes []NodeStatus `json:"nodes,omitempty"`
Conditions []Condition `json:"Conditions,omitempty"`
}
// JobInfo defines the job information to be used to create a cluster or add a node.
type JobInfo struct {
Namespace string `json:"namespace,omitempty"`
Name string `json:"name,omitempty"`
Pods []PodInfo `json:"pods,omitempty"`
}
// PodInfo defines the pod information to be used to create a cluster or add a node.
type PodInfo struct {
Name string `json:"name,omitempty"`
Containers []ContainerInfo `json:"containers,omitempty"`
}
// ContainerInfo defines the container information to be used to create a cluster or add a node.
type ContainerInfo struct {
Name string `json:"name,omitempty"`
}
// PiplineInfo defines the pipeline information for operating the cluster.
type PiplineInfo struct {
// Running or Terminated
Status string `json:"status,omitempty"`
}
// NodeStatus defines the status information of the nodes in the cluster.
type NodeStatus struct {
InternalIP string `json:"internalIP,omitempty"`
Hostname string `json:"hostname,omitempty"`
Roles map[string]bool `json:"roles,omitempty"`
}
// Condition defines the process information.
type Condition struct {
Step string `json:"step,omitempty"`
StartTime metav1.Time `json:"startTime,omitempty"`
EndTime metav1.Time `json:"endTime,omitempty"`
Status bool `json:"status,omitempty"`
Events map[string]Event `json:"event,omitempty"`
}
// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:object:root=true
// +kubebuilder:storageversion
// +kubebuilder:subresource:status
// Cluster is the Schema for the clusters API
// +kubebuilder:resource:path=clusters,scope=Cluster
type Cluster struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ClusterSpec `json:"spec,omitempty"`
Status ClusterStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// ClusterList contains a list of Cluster
type ClusterList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Cluster `json:"items"`
}
func init() {
SchemeBuilder.Register(&Cluster{}, &ClusterList{})
}
// HostCfg defines host information for cluster.
type HostCfg struct {
Name string `yaml:"name,omitempty" json:"name,omitempty"`
Address string `yaml:"address,omitempty" json:"address,omitempty"`
InternalAddress string `yaml:"internalAddress,omitempty" json:"internalAddress,omitempty"`
Port int `yaml:"port,omitempty" json:"port,omitempty"`
User string `yaml:"user,omitempty" json:"user,omitempty"`
Password string `yaml:"password,omitempty" json:"password,omitempty"`
PrivateKey string `yaml:"privateKey,omitempty" json:"privateKey,omitempty"`
PrivateKeyPath string `yaml:"privateKeyPath,omitempty" json:"privateKeyPath,omitempty"`
Arch string `yaml:"arch,omitempty" json:"arch,omitempty"`
Timeout *int64 `yaml:"timeout,omitempty" json:"timeout,omitempty"`
// Labels defines the kubernetes labels for the node.
Labels map[string]string `yaml:"labels,omitempty" json:"labels,omitempty"`
}
// ControlPlaneEndpoint defines the control plane endpoint information for the cluster.
type ControlPlaneEndpoint struct {
InternalLoadbalancer string `yaml:"internalLoadbalancer" json:"internalLoadbalancer,omitempty"`
Domain string `yaml:"domain" json:"domain,omitempty"`
Address string `yaml:"address" json:"address,omitempty"`
Port int `yaml:"port" json:"port,omitempty"`
KubeVip KubeVip `yaml:"kubevip" json:"kubevip,omitempty"`
}
type KubeVip struct {
Mode string `yaml:"mode" json:"mode,omitempty"`
}
// System defines the system config for each node in the cluster.
type System struct {
NtpServers []string `yaml:"ntpServers" json:"ntpServers,omitempty"`
Timezone string `yaml:"timezone" json:"timezone,omitempty"`
Rpms []string `yaml:"rpms" json:"rpms,omitempty"`
Debs []string `yaml:"debs" json:"debs,omitempty"`
}
// RegistryConfig defines the configuration of the image registry.
type RegistryConfig struct {
Type string `yaml:"type" json:"type,omitempty"`
RegistryMirrors []string `yaml:"registryMirrors" json:"registryMirrors,omitempty"`
InsecureRegistries []string `yaml:"insecureRegistries" json:"insecureRegistries,omitempty"`
PrivateRegistry string `yaml:"privateRegistry" json:"privateRegistry,omitempty"`
DataRoot string `yaml:"dataRoot" json:"dataRoot,omitempty"`
NamespaceOverride string `yaml:"namespaceOverride" json:"namespaceOverride,omitempty"`
Auths runtime.RawExtension `yaml:"auths" json:"auths,omitempty"`
}
// KubeSphere defines the configuration information of KubeSphere.
type KubeSphere struct {
Enabled bool `json:"enabled,omitempty"`
Version string `json:"version,omitempty"`
Configurations string `json:"configurations,omitempty"`
}
// GenerateCertSANs is used to generate cert sans for cluster.
func (cfg *ClusterSpec) GenerateCertSANs() []string {
clusterSvc := fmt.Sprintf("kubernetes.default.svc.%s", cfg.Kubernetes.DNSDomain)
defaultCertSANs := []string{"kubernetes", "kubernetes.default", "kubernetes.default.svc", clusterSvc, "localhost", "127.0.0.1"}
extraCertSANs := make([]string, 0)
extraCertSANs = append(extraCertSANs, cfg.ControlPlaneEndpoint.Domain)
extraCertSANs = append(extraCertSANs, cfg.ControlPlaneEndpoint.Address)
for _, host := range cfg.Hosts {
extraCertSANs = append(extraCertSANs, host.Name)
extraCertSANs = append(extraCertSANs, fmt.Sprintf("%s.%s", host.Name, cfg.Kubernetes.DNSDomain))
if host.Address != cfg.ControlPlaneEndpoint.Address {
extraCertSANs = append(extraCertSANs, host.Address)
}
if host.InternalAddress != host.Address && host.InternalAddress != cfg.ControlPlaneEndpoint.Address {
extraCertSANs = append(extraCertSANs, host.InternalAddress)
}
}
extraCertSANs = append(extraCertSANs, util.ParseIp(cfg.Network.KubeServiceCIDR)[0])
defaultCertSANs = append(defaultCertSANs, extraCertSANs...)
if cfg.Kubernetes.ApiserverCertExtraSans != nil {
defaultCertSANs = append(defaultCertSANs, cfg.Kubernetes.ApiserverCertExtraSans...)
}
return defaultCertSANs
}
// GroupHosts is used to group hosts according to the configuration file.
func (cfg *ClusterSpec) GroupHosts() map[string][]*KubeHost {
hostMap := make(map[string]*KubeHost)
for _, hostCfg := range cfg.Hosts {
host := toHosts(hostCfg)
hostMap[host.Name] = host
}
roleGroups := cfg.ParseRolesList(hostMap)
// Validate the parameters under roleGroups.
if len(roleGroups[Master]) == 0 && len(roleGroups[ControlPlane]) == 0 {
logger.Fatal(errors.New("the number of master/control-plane nodes cannot be 0"))
}
if len(roleGroups[Etcd]) == 0 && cfg.Etcd.Type == KubeKey {
logger.Fatal(errors.New("the number of etcd nodes cannot be 0"))
}
if len(roleGroups[Registry]) > 1 {
logger.Fatal(errors.New("the number of registry nodes cannot be greater than 1"))
}
for _, host := range roleGroups[ControlPlane] {
host.SetRole(Master)
roleGroups[Master] = append(roleGroups[Master], host)
}
return roleGroups
}
// +kubebuilder:object:generate=false
type KubeHost struct {
*connector.BaseHost
Labels map[string]string
}
func toHosts(cfg HostCfg) *KubeHost {
host := connector.NewHost()
host.Name = cfg.Name
host.Address = cfg.Address
host.InternalAddress = cfg.InternalAddress
host.Port = cfg.Port
host.User = cfg.User
host.Password = cfg.Password
host.PrivateKey = cfg.PrivateKey
host.PrivateKeyPath = cfg.PrivateKeyPath
host.Arch = cfg.Arch
// Guard against a nil Timeout: toHosts may be called on a HostCfg that
// has not passed through SetDefaultHostsCfg.
if cfg.Timeout != nil {
host.Timeout = *cfg.Timeout
} else {
host.Timeout = DefaultSSHTimeout
}
kubeHost := &KubeHost{
BaseHost: host,
Labels: cfg.Labels,
}
return kubeHost
}
// CorednsClusterIP is used to get the coredns service address inside the cluster.
func (cfg *ClusterSpec) CorednsClusterIP() string {
return util.ParseIp(cfg.Network.KubeServiceCIDR)[2]
}
// ClusterDNS is used to get the dns server address inside the cluster.
func (cfg *ClusterSpec) ClusterDNS() string {
if cfg.Kubernetes.EnableNodelocaldns() {
return "169.254.25.10"
} else {
return cfg.CorednsClusterIP()
}
}
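The nodelocaldns switch above amounts to a two-way choice between the link-local cache address and the coredns service IP. A minimal, hypothetical stand-alone sketch (the coredns IP is passed in directly, standing in for the value derived from `KubeServiceCIDR`):

```go
package main

import "fmt"

// clusterDNS mirrors the decision in ClusterDNS: when nodelocaldns is
// enabled, kubelet points at the well-known link-local cache address;
// otherwise it uses the coredns service IP.
func clusterDNS(nodelocaldns bool, corednsIP string) string {
	if nodelocaldns {
		return "169.254.25.10"
	}
	return corednsIP
}

func main() {
	fmt.Println(clusterDNS(true, "10.233.0.3"))  // 169.254.25.10
	fmt.Println(clusterDNS(false, "10.233.0.3")) // 10.233.0.3
}
```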
// ParseRolesList is used to parse the host grouping list.
func (cfg *ClusterSpec) ParseRolesList(hostMap map[string]*KubeHost) map[string][]*KubeHost {
roleGroupLists := make(map[string][]*KubeHost)
for role, hosts := range cfg.RoleGroups {
roleGroup := make([]string, 0)
for _, host := range hosts {
h := make([]string, 0)
if strings.Contains(host, "[") && strings.Contains(host, "]") && strings.Contains(host, ":") {
rangeHosts := getHostsRange(host, hostMap, role)
h = append(h, rangeHosts...)
} else {
if err := hostVerify(hostMap, host, role); err != nil {
logger.Fatal(err)
}
h = append(h, host)
}
roleGroup = append(roleGroup, h...)
for _, hostName := range h {
if kubeHost, ok := hostMap[hostName]; ok {
roleGroupAppend(roleGroupLists, role, kubeHost)
} else {
logger.Fatal(fmt.Errorf("incorrect nodeName under roleGroups/%s in the configuration file", role))
}
}
}
}
return roleGroupLists
}
func roleGroupAppend(roleGroupLists map[string][]*KubeHost, role string, host *KubeHost) {
host.SetRole(role)
r := roleGroupLists[role]
r = append(r, host)
roleGroupLists[role] = r
}
func getHostsRange(rangeStr string, hostMap map[string]*KubeHost, group string) []string {
hostRangeList := make([]string, 0)
r := regexp.MustCompile(`\[(\d+)\:(\d+)\]`)
nameSuffix := r.FindStringSubmatch(rangeStr)
namePrefix := strings.Split(rangeStr, nameSuffix[0])[0]
nameSuffixStart, _ := strconv.Atoi(nameSuffix[1])
nameSuffixEnd, _ := strconv.Atoi(nameSuffix[2])
for i := nameSuffixStart; i <= nameSuffixEnd; i++ {
if err := hostVerify(hostMap, fmt.Sprintf("%s%d", namePrefix, i), group); err != nil {
logger.Fatal(err)
}
hostRangeList = append(hostRangeList, fmt.Sprintf("%s%d", namePrefix, i))
}
return hostRangeList
}
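The bracket range syntax handled by `getHostsRange` can be sketched in isolation. This is a simplified, hypothetical version of the `[start:end]` expansion that skips the `hostMap` verification step:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// expandRange expands a host pattern like "node[1:3]" into
// ["node1", "node2", "node3"]; patterns without a bracket range
// are returned unchanged.
func expandRange(pattern string) []string {
	r := regexp.MustCompile(`\[(\d+):(\d+)\]`)
	m := r.FindStringSubmatch(pattern)
	if m == nil {
		return []string{pattern}
	}
	prefix := strings.Split(pattern, m[0])[0]
	start, _ := strconv.Atoi(m[1])
	end, _ := strconv.Atoi(m[2])
	hosts := make([]string, 0, end-start+1)
	for i := start; i <= end; i++ {
		hosts = append(hosts, fmt.Sprintf("%s%d", prefix, i))
	}
	return hosts
}

func main() {
	fmt.Println(expandRange("node[1:3]")) // [node1 node2 node3]
	fmt.Println(expandRange("master"))    // [master]
}
```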
func hostVerify(hostMap map[string]*KubeHost, hostName string, group string) error {
if _, ok := hostMap[hostName]; !ok {
return fmt.Errorf("[%s] is in [%s] group, but not in hosts list", hostName, group)
}
return nil
}
func (c ControlPlaneEndpoint) IsInternalLBEnabled() bool {
return c.InternalLoadbalancer == Haproxy
}
func (c ControlPlaneEndpoint) IsInternalLBEnabledVip() bool {
return c.InternalLoadbalancer == Kubevip
}


@@ -0,0 +1,373 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"fmt"
"os"
"strings"
"bytetrade.io/web3os/installer/pkg/core/util"
)
const (
DefaultPreDir = "kubekey"
DefaultTmpDir = "/tmp/kubekey"
DefaultSSHPort = 22
DefaultLBPort = 6443
DefaultApiserverPort = 6443
DefaultLBDomain = "lb.kubesphere.local"
DefaultNetworkPlugin = "calico"
DefaultPodsCIDR = "10.233.64.0/18"
DefaultServiceCIDR = "10.233.0.0/18"
DefaultKubeImageNamespace = "kubesphere"
DefaultClusterName = "cluster.local"
DefaultDNSDomain = "cluster.local"
DefaultArch = "amd64"
DefaultSSHTimeout = 30
DefaultEtcdVersion = "v3.4.13"
DefaultEtcdPort = "2379"
DefaultDockerVersion = "20.10.8"
DefaultContainerdVersion = "1.6.4"
DefaultRuncVersion = "v1.1.1"
DefaultRuncVersion_v_1_1_4 = "v1.1.4"
DefaultCrictlVersion = "v1.24.0"
DefaultKubeVersion = "v1.23.10"
DefaultCalicoVersion = "v3.29.2"
DefaultFlannelVersion = "v0.12.0"
DefaultCniVersion = "v0.9.1"
DefaultCniVersion_v_1_1_1 = "v1.1.1"
DefaultCiliumVersion = "v1.11.6"
DefaultKubeovnVersion = "v1.10.6"
DefalutMultusVersion = "v3.8"
DefaultHelmVersion = "v3.9.0"
DefaultDockerComposeVersion = "v2.2.2"
DefaultRegistryVersion = "2"
DefaultHarborVersion = "v2.5.3"
DefaultUbuntu24AppArmonVersion = "4.0.1"
DefaultSocatVersion = "1.7.3.4"
DefaultFlexVersion = "2.6.4"
DefaultConntrackVersion = "1.4.1"
DefaultOssUtilVersion = "v1.7.18"
DefaultCosUtilVersion = "v1.0.2"
DefaultMinioVersion = "RELEASE.2023-05-04T21-44-30Z"
DefaultMinioOperatorVersion = "0.0.1"
DefaultRedisVersion = "5.0.14"
DefaultJuiceFsVersion = "v11.1.0"
CudaKeyringVersion1_0 = "1.0"
CudaKeyringVersion1_1 = "1.1"
DefaultVeleroVersion = "v1.11.3"
DefaultWSLInstallPackageVersion = "2.3.26.0"
DefaultShutdownGracePeriod = "30s"
DefaultShutdownGracePeriodCriticalPods = "10s"
DefaultMaxPods = 200
DefaultPodPidsLimit = 10000
DefaultNodeCidrMaskSize = 24
DefaultIPIPMode = "Always"
DefaultVXLANMode = "Never"
DefaultVethMTU = 0
DefaultBackendMode = "vxlan"
DefaultProxyMode = "ipvs"
DefaultCrioEndpoint = "unix:///var/run/crio/crio.sock"
DefaultContainerdEndpoint = "unix:///run/containerd/containerd.sock"
DefaultIsulaEndpoint = "unix:///var/run/isulad.sock"
Etcd = "etcd"
Master = "master"
ControlPlane = "control-plane"
Worker = "worker"
K8s = "k8s"
Registry = "registry"
DefaultEtcdBackupDir = "/var/backups/kube_etcd"
DefaultEtcdBackupPeriod = 30
DefaultKeepBackNumber = 5
DefaultEtcdBackupScriptDir = "/usr/local/bin/kube-scripts"
DefaultPodGateway = "10.233.64.1"
DefaultJoinCIDR = "100.64.0.0/16"
DefaultNetworkType = "geneve"
DefaultTunnelType = "geneve"
DefaultPodNicType = "veth-pair"
DefaultModules = "kube_ovn_fastpath.ko"
DefaultRPMs = "openvswitch-kmod"
DefaultVlanID = "100"
DefaultOvnLabel = "node-role.kubernetes.io/control-plane"
DefaultDPDKVersion = "19.11"
DefaultDNSAddress = "114.114.114.114"
DefaultDpdkTunnelIface = "br-phy"
DefaultCNIConfigPriority = "01"
Docker = "docker"
Containerd = "containerd"
Crio = "crio"
Isula = "isula"
Haproxy = "haproxy"
Kubevip = "kube-vip"
DefaultKubeVipMode = "ARP"
)
func (cfg *ClusterSpec) SetDefaultClusterSpec(incluster bool, macos bool) (*ClusterSpec, map[string][]*KubeHost) {
clusterCfg := ClusterSpec{}
clusterCfg.Hosts = SetDefaultHostsCfg(cfg)
clusterCfg.RoleGroups = cfg.RoleGroups
clusterCfg.Etcd = SetDefaultEtcdCfg(cfg, macos)
roleGroups := clusterCfg.GroupHosts()
clusterCfg.ControlPlaneEndpoint = SetDefaultLBCfg(cfg, roleGroups[Master], incluster)
clusterCfg.Network = SetDefaultNetworkCfg(cfg)
clusterCfg.System = cfg.System
clusterCfg.Kubernetes = SetDefaultClusterCfg(cfg)
clusterCfg.Registry = cfg.Registry
clusterCfg.Addons = cfg.Addons
clusterCfg.KubeSphere = cfg.KubeSphere
if cfg.Kubernetes.ClusterName == "" {
clusterCfg.Kubernetes.ClusterName = DefaultClusterName
}
if cfg.Kubernetes.Version == "" {
clusterCfg.Kubernetes.Version = DefaultKubeVersion
}
if cfg.Kubernetes.ShutdownGracePeriod == "" {
clusterCfg.Kubernetes.ShutdownGracePeriod = DefaultShutdownGracePeriod
}
if cfg.Kubernetes.ShutdownGracePeriodCriticalPods == "" {
clusterCfg.Kubernetes.ShutdownGracePeriodCriticalPods = DefaultShutdownGracePeriodCriticalPods
}
if cfg.Kubernetes.MaxPods == 0 {
clusterCfg.Kubernetes.MaxPods = DefaultMaxPods
}
if cfg.Kubernetes.PodPidsLimit == 0 {
clusterCfg.Kubernetes.PodPidsLimit = DefaultPodPidsLimit
}
if cfg.Kubernetes.NodeCidrMaskSize == 0 {
clusterCfg.Kubernetes.NodeCidrMaskSize = DefaultNodeCidrMaskSize
}
if cfg.Kubernetes.ProxyMode == "" {
clusterCfg.Kubernetes.ProxyMode = DefaultProxyMode
}
return &clusterCfg, roleGroups
}
func SetDefaultHostsCfg(cfg *ClusterSpec) []HostCfg {
var hostCfg []HostCfg
if len(cfg.Hosts) == 0 {
return nil
}
for _, host := range cfg.Hosts {
if len(host.Address) == 0 && len(host.InternalAddress) > 0 {
host.Address = host.InternalAddress
}
if len(host.InternalAddress) == 0 && len(host.Address) > 0 {
host.InternalAddress = host.Address
}
if host.User == "" {
host.User = "root"
}
if host.Port == 0 {
host.Port = DefaultSSHPort
}
if host.PrivateKey == "" {
if host.Password == "" && host.PrivateKeyPath == "" {
host.PrivateKeyPath = "~/.ssh/id_rsa"
}
if host.PrivateKeyPath != "" && strings.HasPrefix(strings.TrimSpace(host.PrivateKeyPath), "~/") {
homeDir, _ := util.Home()
host.PrivateKeyPath = strings.Replace(host.PrivateKeyPath, "~/", fmt.Sprintf("%s/", homeDir), 1)
}
}
if host.Arch == "" {
host.Arch = DefaultArch
}
if host.Timeout == nil {
var timeout int64
timeout = DefaultSSHTimeout
host.Timeout = &timeout
}
hostCfg = append(hostCfg, host)
}
return hostCfg
}
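The `~/` expansion applied to `PrivateKeyPath` above can be isolated. A hypothetical sketch that takes the home directory as a parameter in place of `util.Home()`:

```go
package main

import (
	"fmt"
	"strings"
)

// expandHome replaces a leading "~/" in path with homeDir, matching the
// replacement done for PrivateKeyPath in SetDefaultHostsCfg.
func expandHome(path, homeDir string) string {
	path = strings.TrimSpace(path)
	if strings.HasPrefix(path, "~/") {
		return strings.Replace(path, "~/", fmt.Sprintf("%s/", homeDir), 1)
	}
	return path
}

func main() {
	fmt.Println(expandHome("~/.ssh/id_rsa", "/home/ubuntu")) // /home/ubuntu/.ssh/id_rsa
	fmt.Println(expandHome("/etc/ssh/key", "/home/ubuntu"))  // /etc/ssh/key
}
```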
func SetDefaultLBCfg(cfg *ClusterSpec, masterGroup []*KubeHost, incluster bool) ControlPlaneEndpoint {
if !incluster {
// Not an HA environment: the LB address must not be set.
if len(masterGroup) == 1 && cfg.ControlPlaneEndpoint.Address != "" {
fmt.Println("When the environment is not HA, the LB address does not need to be set. Please remove the corresponding value.")
os.Exit(0)
}
// Check whether an LB should be configured.
if len(masterGroup) >= 3 && !cfg.ControlPlaneEndpoint.IsInternalLBEnabled() && cfg.ControlPlaneEndpoint.Address == "" {
fmt.Println("When the environment has at least three masters, you must set the LB address or enable the internal load balancer.")
os.Exit(0)
}
// The LB address and the internal LB cannot both be enabled.
if cfg.ControlPlaneEndpoint.IsInternalLBEnabled() && cfg.ControlPlaneEndpoint.Address != "" {
fmt.Println("You cannot set the internal load balancer and the LB address at the same time.")
os.Exit(0)
}
}
if cfg.ControlPlaneEndpoint.Address == "" || cfg.ControlPlaneEndpoint.Address == "127.0.0.1" {
cfg.ControlPlaneEndpoint.Address = masterGroup[0].InternalAddress
}
if cfg.ControlPlaneEndpoint.Domain == "" {
cfg.ControlPlaneEndpoint.Domain = DefaultLBDomain
}
if cfg.ControlPlaneEndpoint.Port == 0 {
cfg.ControlPlaneEndpoint.Port = DefaultLBPort
}
if cfg.ControlPlaneEndpoint.KubeVip.Mode == "" {
cfg.ControlPlaneEndpoint.KubeVip.Mode = DefaultKubeVipMode
}
defaultLbCfg := cfg.ControlPlaneEndpoint
return defaultLbCfg
}
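The three mutually exclusive checks at the top of `SetDefaultLBCfg` can be restated as a pure function. A hypothetical sketch that returns an error instead of exiting the process (the messages are paraphrased):

```go
package main

import (
	"errors"
	"fmt"
)

// validateLB re-states the SetDefaultLBCfg preconditions: a single-master
// cluster must not set an LB address, a 3+ master cluster needs either an
// LB address or the internal LB, and the two options are exclusive.
func validateLB(masters int, internalLB bool, lbAddress string) error {
	if masters == 1 && lbAddress != "" {
		return errors.New("non-HA environment: remove the LB address")
	}
	if masters >= 3 && !internalLB && lbAddress == "" {
		return errors.New("HA environment: set the LB address or enable the internal load balancer")
	}
	if internalLB && lbAddress != "" {
		return errors.New("the internal load balancer and the LB address cannot both be set")
	}
	return nil
}

func main() {
	fmt.Println(validateLB(3, true, ""))        // <nil>
	fmt.Println(validateLB(1, false, "1.2.3.4"))
}
```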
func SetDefaultNetworkCfg(cfg *ClusterSpec) NetworkConfig {
if cfg.Network.Plugin == "" {
cfg.Network.Plugin = DefaultNetworkPlugin
}
if cfg.Network.KubePodsCIDR == "" {
cfg.Network.KubePodsCIDR = DefaultPodsCIDR
}
if cfg.Network.KubeServiceCIDR == "" {
cfg.Network.KubeServiceCIDR = DefaultServiceCIDR
}
if cfg.Network.Calico.IPIPMode == "" {
cfg.Network.Calico.IPIPMode = DefaultIPIPMode
}
if cfg.Network.Calico.VXLANMode == "" {
cfg.Network.Calico.VXLANMode = DefaultVXLANMode
}
if cfg.Network.Calico.VethMTU == 0 {
cfg.Network.Calico.VethMTU = DefaultVethMTU
}
if cfg.Network.Flannel.BackendMode == "" {
cfg.Network.Flannel.BackendMode = DefaultBackendMode
}
// kube-ovn default config
if cfg.Network.Kubeovn.KubeOvnController.PodGateway == "" {
cfg.Network.Kubeovn.KubeOvnController.PodGateway = DefaultPodGateway
}
if cfg.Network.Kubeovn.JoinCIDR == "" {
cfg.Network.Kubeovn.JoinCIDR = DefaultJoinCIDR
}
if cfg.Network.Kubeovn.Label == "" {
cfg.Network.Kubeovn.Label = DefaultOvnLabel
}
if cfg.Network.Kubeovn.KubeOvnController.VlanID == "" {
cfg.Network.Kubeovn.KubeOvnController.VlanID = DefaultVlanID
}
if cfg.Network.Kubeovn.KubeOvnController.NetworkType == "" {
cfg.Network.Kubeovn.KubeOvnController.NetworkType = DefaultNetworkType
}
if cfg.Network.Kubeovn.TunnelType == "" {
cfg.Network.Kubeovn.TunnelType = DefaultTunnelType
}
if cfg.Network.Kubeovn.KubeOvnController.PodNicType == "" {
cfg.Network.Kubeovn.KubeOvnController.PodNicType = DefaultPodNicType
}
if cfg.Network.Kubeovn.KubeOvnCni.Modules == "" {
cfg.Network.Kubeovn.KubeOvnCni.Modules = DefaultModules
}
if cfg.Network.Kubeovn.KubeOvnCni.RPMs == "" {
cfg.Network.Kubeovn.KubeOvnCni.RPMs = DefaultRPMs
}
if cfg.Network.Kubeovn.KubeOvnPinger.PingerExternalAddress == "" {
cfg.Network.Kubeovn.KubeOvnPinger.PingerExternalAddress = DefaultDNSAddress
}
if cfg.Network.Kubeovn.Dpdk.DpdkVersion == "" {
cfg.Network.Kubeovn.Dpdk.DpdkVersion = DefaultDPDKVersion
}
if cfg.Network.Kubeovn.Dpdk.DpdkTunnelIface == "" {
cfg.Network.Kubeovn.Dpdk.DpdkTunnelIface = DefaultDpdkTunnelIface
}
if cfg.Network.Kubeovn.KubeOvnCni.CNIConfigPriority == "" {
cfg.Network.Kubeovn.KubeOvnCni.CNIConfigPriority = DefaultCNIConfigPriority
}
defaultNetworkCfg := cfg.Network
return defaultNetworkCfg
}
func SetDefaultClusterCfg(cfg *ClusterSpec) Kubernetes {
if cfg.Kubernetes.Version == "" {
cfg.Kubernetes.Version = DefaultKubeVersion
} else {
s := strings.Split(cfg.Kubernetes.Version, "-")
if len(s) > 1 {
cfg.Kubernetes.Version = s[0]
cfg.Kubernetes.Type = s[1]
}
}
if cfg.Kubernetes.Type == "" {
cfg.Kubernetes.Type = "kubernetes"
}
if cfg.Kubernetes.ClusterName == "" {
cfg.Kubernetes.ClusterName = DefaultClusterName
}
if cfg.Kubernetes.DNSDomain == "" {
cfg.Kubernetes.DNSDomain = DefaultDNSDomain
}
if cfg.Kubernetes.ContainerManager == "" {
cfg.Kubernetes.ContainerManager = Docker
}
if cfg.Kubernetes.ContainerRuntimeEndpoint == "" {
switch cfg.Kubernetes.ContainerManager {
case Docker:
cfg.Kubernetes.ContainerRuntimeEndpoint = ""
case Crio:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultCrioEndpoint
case Containerd:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultContainerdEndpoint
case Isula:
cfg.Kubernetes.ContainerRuntimeEndpoint = DefaultIsulaEndpoint
default:
cfg.Kubernetes.ContainerRuntimeEndpoint = ""
}
}
defaultClusterCfg := cfg.Kubernetes
return defaultClusterCfg
}
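The version parsing above splits strings like `v1.21.4-k3s` into a version and a distribution type. A hypothetical stand-alone sketch of that rule:

```go
package main

import (
	"fmt"
	"strings"
)

// splitVersion mirrors the parsing in SetDefaultClusterCfg: a "-" suffix
// selects the distribution type; without one, the type defaults to
// "kubernetes".
func splitVersion(v string) (version, typ string) {
	s := strings.Split(v, "-")
	if len(s) > 1 {
		return s[0], s[1]
	}
	return v, "kubernetes"
}

func main() {
	fmt.Println(splitVersion("v1.21.4-k3s")) // v1.21.4 k3s
	fmt.Println(splitVersion("v1.23.10"))    // v1.23.10 kubernetes
}
```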
func SetDefaultEtcdCfg(cfg *ClusterSpec, macos bool) EtcdCluster {
if macos {
cfg.Etcd.Type = MiniKube
} else if cfg.Etcd.Type == "" {
cfg.Etcd.Type = KubeKey
} else if cfg.Etcd.Type == Kubeadm {
// k3s manages its own embedded etcd, so a kubeadm-managed etcd does not apply.
parts := strings.Split(cfg.Kubernetes.Version, "-")
if cfg.Kubernetes.Type == "k3s" || (len(parts) > 1 && parts[1] == "k3s") {
cfg.Etcd.Type = KubeKey
}
}
if cfg.Etcd.BackupDir == "" {
cfg.Etcd.BackupDir = DefaultEtcdBackupDir
}
if cfg.Etcd.BackupPeriod == 0 {
cfg.Etcd.BackupPeriod = DefaultEtcdBackupPeriod
}
if cfg.Etcd.KeepBackupNumber == 0 {
cfg.Etcd.KeepBackupNumber = DefaultKeepBackNumber
}
if cfg.Etcd.BackupScriptDir == "" {
cfg.Etcd.BackupScriptDir = DefaultEtcdBackupScriptDir
}
return cfg.Etcd
}


@@ -0,0 +1,49 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
const (
KubeKey = "kubekey"
Kubeadm = "kubeadm"
External = "external"
MiniKube = "minikube"
)
type EtcdCluster struct {
// Type of the etcd cluster; one of 'kubekey', 'kubeadm', or 'external'.
Type string `yaml:"type" json:"type,omitempty"`
// ExternalEtcd describes how to connect to an external etcd cluster when type is set to external
External ExternalEtcd `yaml:"external" json:"external,omitempty"`
BackupDir string `yaml:"backupDir" json:"backupDir,omitempty"`
BackupPeriod int `yaml:"backupPeriod" json:"backupPeriod,omitempty"`
KeepBackupNumber int `yaml:"keepBackupNumber" json:"keepBackupNumber,omitempty"`
BackupScriptDir string `yaml:"backupScript" json:"backupScript,omitempty"`
}
// ExternalEtcd describes how to connect to an external etcd cluster.
// The 'kubekey', 'kubeadm', and 'external' etcd types are mutually exclusive.
type ExternalEtcd struct {
// Endpoints of etcd members. Useful for using external etcd.
// If not provided, kubeadm will run etcd in a static pod.
Endpoints []string `yaml:"endpoints" json:"endpoints,omitempty"`
// CAFile is an SSL Certificate Authority file used to secure etcd communication.
CAFile string `yaml:"caFile" json:"caFile,omitempty"`
// CertFile is an SSL certification file used to secure etcd communication.
CertFile string `yaml:"certFile" json:"certFile,omitempty"`
// KeyFile is an SSL key file used to secure etcd communication.
KeyFile string `yaml:"keyFile" json:"keyFile,omitempty"`
}


@@ -0,0 +1,23 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
type Event struct {
Step string `yaml:"step" json:"step,omitempty"`
Status string `yaml:"status" json:"status,omitempty"`
Message string `yaml:"message" json:"message,omitempty"`
}


@@ -0,0 +1,43 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha2 contains API Schema definitions for the kubekey v1alpha2 API group
// +kubebuilder:object:generate=true
// +groupName=kubekey.kubesphere.io
package v1alpha2
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = GroupVersion
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "kubekey.kubesphere.io", Version: "v1alpha2"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
func Resource(resource string) schema.GroupResource {
return GroupVersion.WithResource(resource).GroupResource()
}


@@ -0,0 +1,92 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import "k8s.io/apimachinery/pkg/runtime"
// Kubernetes contains the configuration for the cluster
type Kubernetes struct {
Type string `yaml:"type" json:"type,omitempty"`
Version string `yaml:"version" json:"version,omitempty"`
ClusterName string `yaml:"clusterName" json:"clusterName,omitempty"`
DNSDomain string `yaml:"dnsDomain" json:"dnsDomain,omitempty"`
DisableKubeProxy bool `yaml:"disableKubeProxy" json:"disableKubeProxy,omitempty"`
MasqueradeAll bool `yaml:"masqueradeAll" json:"masqueradeAll,omitempty"`
ShutdownGracePeriod string `yaml:"shutdownGracePeriod" json:"shutdownGracePeriod,omitempty"`
ShutdownGracePeriodCriticalPods string `json:"shutdownGracePeriodCriticalPods,omitempty"`
MaxPods int `yaml:"maxPods" json:"maxPods,omitempty"`
PodPidsLimit int `yaml:"podPidsLimit" json:"podPidsLimit,omitempty"`
NodeCidrMaskSize int `yaml:"nodeCidrMaskSize" json:"nodeCidrMaskSize,omitempty"`
ApiserverCertExtraSans []string `yaml:"apiserverCertExtraSans" json:"apiserverCertExtraSans,omitempty"`
ProxyMode string `yaml:"proxyMode" json:"proxyMode,omitempty"`
AutoRenewCerts *bool `yaml:"autoRenewCerts" json:"autoRenewCerts,omitempty"`
// +optional
Nodelocaldns *bool `yaml:"nodelocaldns" json:"nodelocaldns,omitempty"`
ContainerManager string `yaml:"containerManager" json:"containerManager,omitempty"`
ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint" json:"containerRuntimeEndpoint,omitempty"`
NodeFeatureDiscovery NodeFeatureDiscovery `yaml:"nodeFeatureDiscovery" json:"nodeFeatureDiscovery,omitempty"`
Kata Kata `yaml:"kata" json:"kata,omitempty"`
ApiServerArgs []string `yaml:"apiserverArgs" json:"apiserverArgs,omitempty"`
ControllerManagerArgs []string `yaml:"controllerManagerArgs" json:"controllerManagerArgs,omitempty"`
SchedulerArgs []string `yaml:"schedulerArgs" json:"schedulerArgs,omitempty"`
KubeletArgs []string `yaml:"kubeletArgs" json:"kubeletArgs,omitempty"`
KubeProxyArgs []string `yaml:"kubeProxyArgs" json:"kubeProxyArgs,omitempty"`
FeatureGates map[string]bool `yaml:"featureGates" json:"featureGates,omitempty"`
KubeletConfiguration runtime.RawExtension `yaml:"kubeletConfiguration" json:"kubeletConfiguration,omitempty"`
KubeProxyConfiguration runtime.RawExtension `yaml:"kubeProxyConfiguration" json:"kubeProxyConfiguration,omitempty"`
}
// Kata contains the configuration for Kata in the cluster
type Kata struct {
Enabled *bool `yaml:"enabled" json:"enabled,omitempty"`
}
// NodeFeatureDiscovery contains the configuration for node-feature-discovery in the cluster
type NodeFeatureDiscovery struct {
Enabled *bool `yaml:"enabled" json:"enabled,omitempty"`
}
// EnableNodelocaldns is used to determine whether to deploy nodelocaldns.
func (k *Kubernetes) EnableNodelocaldns() bool {
if k.Nodelocaldns == nil {
return true
}
return *k.Nodelocaldns
}
// EnableKataDeploy is used to determine whether to deploy kata.
func (k *Kubernetes) EnableKataDeploy() bool {
if k.Kata.Enabled == nil {
return false
}
return *k.Kata.Enabled
}
// EnableNodeFeatureDiscovery is used to determine whether to deploy node-feature-discovery.
func (k *Kubernetes) EnableNodeFeatureDiscovery() bool {
if k.NodeFeatureDiscovery.Enabled == nil {
return false
}
return *k.NodeFeatureDiscovery.Enabled
}
func (k *Kubernetes) EnableAutoRenewCerts() bool {
if k.AutoRenewCerts == nil {
return false
}
return *k.AutoRenewCerts
}


@@ -0,0 +1,144 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
type Iso struct {
LocalPath string `yaml:"localPath" json:"localPath"`
Url string `yaml:"url" json:"url"`
}
type Repository struct {
Iso Iso `yaml:"iso" json:"iso"`
}
type OperatingSystem struct {
Arch string `yaml:"arch" json:"arch"`
Type string `yaml:"type" json:"type,omitempty"`
Id string `yaml:"id" json:"id"`
Version string `yaml:"version" json:"version"`
OsImage string `yaml:"osImage" json:"osImage"`
Repository Repository `yaml:"repository" json:"repository"`
}
type KubernetesDistribution struct {
Type string `yaml:"type" json:"type"`
Version string `yaml:"version" json:"version"`
}
type Helm struct {
Version string `yaml:"version" json:"version"`
}
type CNI struct {
Version string `yaml:"version" json:"version"`
}
type ETCD struct {
Version string `yaml:"version" json:"version"`
}
type DockerManifest struct {
Version string `yaml:"version" json:"version"`
}
type Crictl struct {
Version string `yaml:"version" json:"version"`
}
type ContainerRuntime struct {
Type string `yaml:"type" json:"type"`
Version string `yaml:"version" json:"version"`
}
type DockerRegistry struct {
Version string `yaml:"version" json:"version"`
}
type Harbor struct {
Version string `yaml:"version" json:"version"`
}
type DockerCompose struct {
Version string `yaml:"version" json:"version"`
}
type Components struct {
Helm Helm `yaml:"helm" json:"helm"`
CNI CNI `yaml:"cni" json:"cni"`
ETCD ETCD `yaml:"etcd" json:"etcd"`
ContainerRuntimes []ContainerRuntime `yaml:"containerRuntimes" json:"containerRuntimes"`
Crictl Crictl `yaml:"crictl" json:"crictl,omitempty"`
DockerRegistry DockerRegistry `yaml:"docker-registry" json:"docker-registry"`
Harbor Harbor `yaml:"harbor" json:"harbor"`
DockerCompose DockerCompose `yaml:"docker-compose" json:"docker-compose"`
}
type ManifestRegistry struct {
Auths runtime.RawExtension `yaml:"auths" json:"auths,omitempty"`
}
// ManifestSpec defines the desired state of Manifest
type ManifestSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
Arches []string `yaml:"arches" json:"arches"`
OperatingSystems []OperatingSystem `yaml:"operatingSystems" json:"operatingSystems"`
KubernetesDistributions []KubernetesDistribution `yaml:"kubernetesDistributions" json:"kubernetesDistributions"`
Components Components `yaml:"components" json:"components"`
Images []string `yaml:"images" json:"images"`
ManifestRegistry ManifestRegistry `yaml:"registry" json:"registry"`
}
// ManifestStatus defines the observed state of Manifest
type ManifestStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// Manifest is the Schema for the manifests API
type Manifest struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ManifestSpec `json:"spec,omitempty"`
Status ManifestStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// ManifestList contains a list of Manifest
type ManifestList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Manifest `json:"items"`
}
func init() {
SchemeBuilder.Register(&Manifest{}, &ManifestList{})
}


@@ -0,0 +1,135 @@
/*
Copyright 2021 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
type NetworkConfig struct {
Plugin string `yaml:"plugin" json:"plugin,omitempty"`
KubePodsCIDR string `yaml:"kubePodsCIDR" json:"kubePodsCIDR,omitempty"`
KubeServiceCIDR string `yaml:"kubeServiceCIDR" json:"kubeServiceCIDR,omitempty"`
Calico CalicoCfg `yaml:"calico" json:"calico,omitempty"`
Flannel FlannelCfg `yaml:"flannel" json:"flannel,omitempty"`
Kubeovn KubeovnCfg `yaml:"kubeovn" json:"kubeovn,omitempty"`
MultusCNI MultusCNI `yaml:"multusCNI" json:"multusCNI,omitempty"`
}
type CalicoCfg struct {
IPIPMode string `yaml:"ipipMode" json:"ipipMode,omitempty"`
VXLANMode string `yaml:"vxlanMode" json:"vxlanMode,omitempty"`
VethMTU int `yaml:"vethMTU" json:"vethMTU,omitempty"`
}
type FlannelCfg struct {
BackendMode string `yaml:"backendMode" json:"backendMode,omitempty"`
Directrouting bool `yaml:"directRouting" json:"directRouting,omitempty"`
}
type KubeovnCfg struct {
EnableSSL bool `yaml:"enableSSL" json:"enableSSL,omitempty"`
JoinCIDR string `yaml:"joinCIDR" json:"joinCIDR,omitempty"`
Label string `yaml:"label" json:"label,omitempty"`
TunnelType string `yaml:"tunnelType" json:"tunnelType,omitempty"`
SvcYamlIpfamilypolicy string `yaml:"svcYamlIpfamilypolicy" json:"svcYamlIpfamilypolicy,omitempty"`
Dpdk Dpdk `yaml:"dpdk" json:"dpdk,omitempty"`
OvsOvn OvsOvn `yaml:"ovs-ovn" json:"ovs-ovn,omitempty"`
KubeOvnController KubeOvnController `yaml:"kube-ovn-controller" json:"kube-ovn-controller,omitempty"`
KubeOvnCni KubeOvnCni `yaml:"kube-ovn-cni" json:"kube-ovn-cni,omitempty"`
KubeOvnPinger KubeOvnPinger `yaml:"kube-ovn-pinger" json:"kube-ovn-pinger,omitempty"`
}
type Dpdk struct {
DpdkMode bool `yaml:"dpdkMode" json:"dpdkMode,omitempty"`
DpdkTunnelIface string `yaml:"dpdkTunnelIface" json:"dpdkTunnelIface,omitempty"`
DpdkVersion string `yaml:"dpdkVersion" json:"dpdkVersion,omitempty"`
}
type OvsOvn struct {
HwOffload bool `yaml:"hwOffload" json:"hwOffload,omitempty"`
}
type KubeOvnController struct {
PodGateway string `yaml:"podGateway" json:"podGateway,omitempty"`
CheckGateway *bool `yaml:"checkGateway" json:"checkGateway,omitempty"`
LogicalGateway bool `yaml:"logicalGateway" json:"logicalGateway,omitempty"`
ExcludeIps string `yaml:"excludeIps" json:"excludeIps,omitempty"`
NetworkType string `yaml:"networkType" json:"networkType,omitempty"`
VlanInterfaceName string `yaml:"vlanInterfaceName" json:"vlanInterfaceName,omitempty"`
VlanID string `yaml:"vlanID" json:"vlanID,omitempty"`
PodNicType string `yaml:"podNicType" json:"podNicType,omitempty"`
EnableLB *bool `yaml:"enableLB" json:"enableLB,omitempty"`
EnableNP *bool `yaml:"enableNP" json:"enableNP,omitempty"`
EnableEipSnat *bool `yaml:"enableEipSnat" json:"enableEipSnat,omitempty"`
EnableExternalVPC *bool `yaml:"enableExternalVPC" json:"enableExternalVPC,omitempty"`
}
type KubeOvnCni struct {
EnableMirror bool `yaml:"enableMirror" json:"enableMirror,omitempty"`
Iface string `yaml:"iface" json:"iface,omitempty"`
CNIConfigPriority string `yaml:"CNIConfigPriority" json:"CNIConfigPriority,omitempty"`
Modules string `yaml:"modules" json:"modules,omitempty"`
RPMs string `yaml:"RPMs" json:"RPMs,omitempty"`
}
type KubeOvnPinger struct {
PingerExternalAddress string `yaml:"pingerExternalAddress" json:"pingerExternalAddress,omitempty"`
PingerExternalDomain string `yaml:"pingerExternalDomain" json:"pingerExternalDomain,omitempty"`
}
func (k *KubeovnCfg) KubeovnCheckGateway() bool {
if k.KubeOvnController.CheckGateway == nil {
return true
}
return *k.KubeOvnController.CheckGateway
}
func (k *KubeovnCfg) KubeovnEnableLB() bool {
if k.KubeOvnController.EnableLB == nil {
return true
}
return *k.KubeOvnController.EnableLB
}
func (k *KubeovnCfg) KubeovnEnableNP() bool {
if k.KubeOvnController.EnableNP == nil {
return true
}
return *k.KubeOvnController.EnableNP
}
func (k *KubeovnCfg) KubeovnEnableEipSnat() bool {
if k.KubeOvnController.EnableEipSnat == nil {
return true
}
return *k.KubeOvnController.EnableEipSnat
}
func (k *KubeovnCfg) KubeovnEnableExternalVPC() bool {
if k.KubeOvnController.EnableExternalVPC == nil {
return true
}
return *k.KubeOvnController.EnableExternalVPC
}
type MultusCNI struct {
Enabled *bool `yaml:"enabled" json:"enabled,omitempty"`
}
func (n *NetworkConfig) EnableMultusCNI() bool {
if n.MultusCNI.Enabled == nil {
return false
}
return *n.MultusCNI.Enabled
}

File diff suppressed because it is too large


@@ -0,0 +1,110 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package versioned
import (
"fmt"
kubekeyv1alpha1 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha1"
kubekeyv1alpha2 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha2"
discovery "k8s.io/client-go/discovery"
rest "k8s.io/client-go/rest"
flowcontrol "k8s.io/client-go/util/flowcontrol"
)
type Interface interface {
Discovery() discovery.DiscoveryInterface
KubekeyV1alpha1() kubekeyv1alpha1.KubekeyV1alpha1Interface
KubekeyV1alpha2() kubekeyv1alpha2.KubekeyV1alpha2Interface
}
// Clientset contains the clients for groups. Each group has exactly one
// version included in a Clientset.
type Clientset struct {
*discovery.DiscoveryClient
kubekeyV1alpha1 *kubekeyv1alpha1.KubekeyV1alpha1Client
kubekeyV1alpha2 *kubekeyv1alpha2.KubekeyV1alpha2Client
}
// KubekeyV1alpha1 retrieves the KubekeyV1alpha1Client
func (c *Clientset) KubekeyV1alpha1() kubekeyv1alpha1.KubekeyV1alpha1Interface {
return c.kubekeyV1alpha1
}
// KubekeyV1alpha2 retrieves the KubekeyV1alpha2Client
func (c *Clientset) KubekeyV1alpha2() kubekeyv1alpha2.KubekeyV1alpha2Interface {
return c.kubekeyV1alpha2
}
// Discovery retrieves the DiscoveryClient
func (c *Clientset) Discovery() discovery.DiscoveryInterface {
if c == nil {
return nil
}
return c.DiscoveryClient
}
// NewForConfig creates a new Clientset for the given config.
// If config's RateLimiter is not set and QPS and Burst are acceptable,
// NewForConfig will generate a rate-limiter in configShallowCopy.
func NewForConfig(c *rest.Config) (*Clientset, error) {
configShallowCopy := *c
if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 {
if configShallowCopy.Burst <= 0 {
return nil, fmt.Errorf("burst is required to be greater than 0 when RateLimiter is not set and QPS is set to greater than 0")
}
configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst)
}
var cs Clientset
var err error
cs.kubekeyV1alpha1, err = kubekeyv1alpha1.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
cs.kubekeyV1alpha2, err = kubekeyv1alpha2.NewForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
if err != nil {
return nil, err
}
return &cs, nil
}
// NewForConfigOrDie creates a new Clientset for the given config and
// panics if there is an error in the config.
func NewForConfigOrDie(c *rest.Config) *Clientset {
var cs Clientset
cs.kubekeyV1alpha1 = kubekeyv1alpha1.NewForConfigOrDie(c)
cs.kubekeyV1alpha2 = kubekeyv1alpha2.NewForConfigOrDie(c)
cs.DiscoveryClient = discovery.NewDiscoveryClientForConfigOrDie(c)
return &cs
}
// New creates a new Clientset for the given RESTClient.
func New(c rest.Interface) *Clientset {
var cs Clientset
cs.kubekeyV1alpha1 = kubekeyv1alpha1.New(c)
cs.kubekeyV1alpha2 = kubekeyv1alpha2.New(c)
cs.DiscoveryClient = discovery.NewDiscoveryClient(c)
return &cs
}


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated clientset.
package versioned


@@ -0,0 +1,91 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
clientset "bytetrade.io/web3os/installer/clients/clientset/versioned"
kubekeyv1alpha1 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha1"
fakekubekeyv1alpha1 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha1/fake"
kubekeyv1alpha2 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha2"
fakekubekeyv1alpha2 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha2/fake"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/discovery"
fakediscovery "k8s.io/client-go/discovery/fake"
"k8s.io/client-go/testing"
)
// NewSimpleClientset returns a clientset that will respond with the provided objects.
// It's backed by a very simple object tracker that processes creates, updates and deletions as-is,
// without applying any validations and/or defaults. It shouldn't be considered a replacement
// for a real clientset and is mostly useful in simple unit tests.
func NewSimpleClientset(objects ...runtime.Object) *Clientset {
o := testing.NewObjectTracker(scheme, codecs.UniversalDecoder())
for _, obj := range objects {
if err := o.Add(obj); err != nil {
panic(err)
}
}
cs := &Clientset{tracker: o}
cs.discovery = &fakediscovery.FakeDiscovery{Fake: &cs.Fake}
cs.AddReactor("*", "*", testing.ObjectReaction(o))
cs.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) {
gvr := action.GetResource()
ns := action.GetNamespace()
watch, err := o.Watch(gvr, ns)
if err != nil {
return false, nil, err
}
return true, watch, nil
})
return cs
}
// Clientset implements clientset.Interface. Meant to be embedded into a
// struct to get a default implementation. This makes faking out just the method
// you want to test easier.
type Clientset struct {
testing.Fake
discovery *fakediscovery.FakeDiscovery
tracker testing.ObjectTracker
}
func (c *Clientset) Discovery() discovery.DiscoveryInterface {
return c.discovery
}
func (c *Clientset) Tracker() testing.ObjectTracker {
return c.tracker
}
var (
_ clientset.Interface = &Clientset{}
_ testing.FakeClient = &Clientset{}
)
// KubekeyV1alpha1 retrieves the KubekeyV1alpha1Client
func (c *Clientset) KubekeyV1alpha1() kubekeyv1alpha1.KubekeyV1alpha1Interface {
return &fakekubekeyv1alpha1.FakeKubekeyV1alpha1{Fake: &c.Fake}
}
// KubekeyV1alpha2 retrieves the KubekeyV1alpha2Client
func (c *Clientset) KubekeyV1alpha2() kubekeyv1alpha2.KubekeyV1alpha2Interface {
return &fakekubekeyv1alpha2.FakeKubekeyV1alpha2{Fake: &c.Fake}
}


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated fake clientset.
package fake


@@ -0,0 +1,57 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
kubekeyv1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
kubekeyv1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
serializer "k8s.io/apimachinery/pkg/runtime/serializer"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)
var scheme = runtime.NewScheme()
var codecs = serializer.NewCodecFactory(scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
kubekeyv1alpha1.AddToScheme,
kubekeyv1alpha2.AddToScheme,
}
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
// of clientsets, like in:
//
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
var AddToScheme = localSchemeBuilder.AddToScheme
func init() {
v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"})
utilruntime.Must(AddToScheme(scheme))
}


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package contains the scheme of the automatically generated clientset.
package scheme


@@ -0,0 +1,57 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package scheme
import (
kubekeyv1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
kubekeyv1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
serializer "k8s.io/apimachinery/pkg/runtime/serializer"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)
var Scheme = runtime.NewScheme()
var Codecs = serializer.NewCodecFactory(Scheme)
var ParameterCodec = runtime.NewParameterCodec(Scheme)
var localSchemeBuilder = runtime.SchemeBuilder{
kubekeyv1alpha1.AddToScheme,
kubekeyv1alpha2.AddToScheme,
}
// AddToScheme adds all types of this clientset into the given scheme. This allows composition
// of clientsets, like in:
//
// import (
// "k8s.io/client-go/kubernetes"
// clientsetscheme "k8s.io/client-go/kubernetes/scheme"
// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme"
// )
//
// kclientset, _ := kubernetes.NewForConfig(c)
// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme)
//
// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types
// correctly.
var AddToScheme = localSchemeBuilder.AddToScheme
func init() {
v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"})
utilruntime.Must(AddToScheme(Scheme))
}


@@ -0,0 +1,183 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha1
import (
"context"
"time"
v1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
scheme "bytetrade.io/web3os/installer/clients/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
)
// ClustersGetter has a method to return a ClusterInterface.
// A group's client should implement this interface.
type ClustersGetter interface {
Clusters() ClusterInterface
}
// ClusterInterface has methods to work with Cluster resources.
type ClusterInterface interface {
Create(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.CreateOptions) (*v1alpha1.Cluster, error)
Update(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (*v1alpha1.Cluster, error)
UpdateStatus(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (*v1alpha1.Cluster, error)
Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
Get(ctx context.Context, name string, opts v1.GetOptions) (*v1alpha1.Cluster, error)
List(ctx context.Context, opts v1.ListOptions) (*v1alpha1.ClusterList, error)
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.Cluster, err error)
ClusterExpansion
}
// clusters implements ClusterInterface
type clusters struct {
client rest.Interface
}
// newClusters returns a Clusters
func newClusters(c *KubekeyV1alpha1Client) *clusters {
return &clusters{
client: c.RESTClient(),
}
}
// Get takes name of the cluster, and returns the corresponding cluster object, and an error if there is any.
func (c *clusters) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha1.Cluster, err error) {
result = &v1alpha1.Cluster{}
err = c.client.Get().
Resource("clusters").
Name(name).
VersionedParams(&options, scheme.ParameterCodec).
Do(ctx).
Into(result)
return
}
// List takes label and field selectors, and returns the list of Clusters that match those selectors.
func (c *clusters) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha1.ClusterList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1alpha1.ClusterList{}
err = c.client.Get().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do(ctx).
Into(result)
return
}
// Watch returns a watch.Interface that watches the requested clusters.
func (c *clusters) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch(ctx)
}
// Create takes the representation of a cluster and creates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *clusters) Create(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.CreateOptions) (result *v1alpha1.Cluster, err error) {
result = &v1alpha1.Cluster{}
err = c.client.Post().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// Update takes the representation of a cluster and updates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *clusters) Update(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (result *v1alpha1.Cluster, err error) {
result = &v1alpha1.Cluster{}
err = c.client.Put().
Resource("clusters").
Name(cluster.Name).
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *clusters) UpdateStatus(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (result *v1alpha1.Cluster, err error) {
result = &v1alpha1.Cluster{}
err = c.client.Put().
Resource("clusters").
Name(cluster.Name).
SubResource("status").
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// Delete takes name of the cluster and deletes it. Returns an error if one occurs.
func (c *clusters) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
return c.client.Delete().
Resource("clusters").
Name(name).
Body(&opts).
Do(ctx).
Error()
}
// DeleteCollection deletes a collection of objects.
func (c *clusters) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
var timeout time.Duration
if listOpts.TimeoutSeconds != nil {
timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Resource("clusters").
VersionedParams(&listOpts, scheme.ParameterCodec).
Timeout(timeout).
Body(&opts).
Do(ctx).
Error()
}
// Patch applies the patch and returns the patched cluster.
func (c *clusters) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.Cluster, err error) {
result = &v1alpha1.Cluster{}
err = c.client.Patch(pt).
Resource("clusters").
Name(name).
SubResource(subresources...).
VersionedParams(&opts, scheme.ParameterCodec).
Body(data).
Do(ctx).
Into(result)
return
}


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated typed clients.
package v1alpha1


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// Package fake has the automatically generated clients.
package fake


@@ -0,0 +1,132 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
)
// FakeClusters implements ClusterInterface
type FakeClusters struct {
Fake *FakeKubekeyV1alpha1
}
var clustersResource = schema.GroupVersionResource{Group: "kubekey", Version: "v1alpha1", Resource: "clusters"}
var clustersKind = schema.GroupVersionKind{Group: "kubekey", Version: "v1alpha1", Kind: "Cluster"}
// Get takes name of the cluster, and returns the corresponding cluster object, and an error if there is any.
func (c *FakeClusters) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha1.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootGetAction(clustersResource, name), &v1alpha1.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Cluster), err
}
// List takes label and field selectors, and returns the list of Clusters that match those selectors.
func (c *FakeClusters) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha1.ClusterList, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootListAction(clustersResource, clustersKind, opts), &v1alpha1.ClusterList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1alpha1.ClusterList{ListMeta: obj.(*v1alpha1.ClusterList).ListMeta}
for _, item := range obj.(*v1alpha1.ClusterList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested clusters.
func (c *FakeClusters) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewRootWatchAction(clustersResource, opts))
}
// Create takes the representation of a cluster and creates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *FakeClusters) Create(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.CreateOptions) (result *v1alpha1.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootCreateAction(clustersResource, cluster), &v1alpha1.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Cluster), err
}
// Update takes the representation of a cluster and updates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *FakeClusters) Update(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (result *v1alpha1.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateAction(clustersResource, cluster), &v1alpha1.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Cluster), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeClusters) UpdateStatus(ctx context.Context, cluster *v1alpha1.Cluster, opts v1.UpdateOptions) (*v1alpha1.Cluster, error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateSubresourceAction(clustersResource, "status", cluster), &v1alpha1.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Cluster), err
}
// Delete takes name of the cluster and deletes it. Returns an error if one occurs.
func (c *FakeClusters) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(clustersResource, name), &v1alpha1.Cluster{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeClusters) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewRootDeleteCollectionAction(clustersResource, listOpts)
_, err := c.Fake.Invokes(action, &v1alpha1.ClusterList{})
return err
}
// Patch applies the patch and returns the patched cluster.
func (c *FakeClusters) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootPatchSubresourceAction(clustersResource, name, pt, data, subresources...), &v1alpha1.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha1.Cluster), err
}


@@ -0,0 +1,39 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
v1alpha1 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha1"
rest "k8s.io/client-go/rest"
testing "k8s.io/client-go/testing"
)
type FakeKubekeyV1alpha1 struct {
*testing.Fake
}
func (c *FakeKubekeyV1alpha1) Clusters() v1alpha1.ClusterInterface {
return &FakeClusters{c}
}
// RESTClient returns a RESTClient that is used to communicate
// with API server by this client implementation.
func (c *FakeKubekeyV1alpha1) RESTClient() rest.Interface {
var ret *rest.RESTClient
return ret
}


@@ -0,0 +1,20 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha1
type ClusterExpansion interface{}


@@ -0,0 +1,88 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha1
import (
v1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
"bytetrade.io/web3os/installer/clients/clientset/versioned/scheme"
rest "k8s.io/client-go/rest"
)
type KubekeyV1alpha1Interface interface {
RESTClient() rest.Interface
ClustersGetter
}
// KubekeyV1alpha1Client is used to interact with features provided by the kubekey group.
type KubekeyV1alpha1Client struct {
restClient rest.Interface
}
func (c *KubekeyV1alpha1Client) Clusters() ClusterInterface {
return newClusters(c)
}
// NewForConfig creates a new KubekeyV1alpha1Client for the given config.
func NewForConfig(c *rest.Config) (*KubekeyV1alpha1Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientFor(&config)
if err != nil {
return nil, err
}
return &KubekeyV1alpha1Client{client}, nil
}
// NewForConfigOrDie creates a new KubekeyV1alpha1Client for the given config and
// panics if there is an error in the config.
func NewForConfigOrDie(c *rest.Config) *KubekeyV1alpha1Client {
client, err := NewForConfig(c)
if err != nil {
panic(err)
}
return client
}
// New creates a new KubekeyV1alpha1Client for the given RESTClient.
func New(c rest.Interface) *KubekeyV1alpha1Client {
return &KubekeyV1alpha1Client{c}
}
func setConfigDefaults(config *rest.Config) error {
gv := v1alpha1.GroupVersion
config.GroupVersion = &gv
config.APIPath = "/apis"
config.NegotiatedSerializer = scheme.Codecs.WithoutConversion()
if config.UserAgent == "" {
config.UserAgent = rest.DefaultKubernetesUserAgent()
}
return nil
}
// RESTClient returns a RESTClient that is used to communicate
// with the API server by this client implementation.
func (c *KubekeyV1alpha1Client) RESTClient() rest.Interface {
if c == nil {
return nil
}
return c.restClient
}


@@ -0,0 +1,183 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha2
import (
"context"
"time"
v1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
scheme "bytetrade.io/web3os/installer/clients/clientset/versioned/scheme"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
)
// ClustersGetter has a method to return a ClusterInterface.
// A group's client should implement this interface.
type ClustersGetter interface {
Clusters() ClusterInterface
}
// ClusterInterface has methods to work with Cluster resources.
type ClusterInterface interface {
Create(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.CreateOptions) (*v1alpha2.Cluster, error)
Update(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (*v1alpha2.Cluster, error)
UpdateStatus(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (*v1alpha2.Cluster, error)
Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
Get(ctx context.Context, name string, opts v1.GetOptions) (*v1alpha2.Cluster, error)
List(ctx context.Context, opts v1.ListOptions) (*v1alpha2.ClusterList, error)
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha2.Cluster, err error)
ClusterExpansion
}
// clusters implements ClusterInterface
type clusters struct {
client rest.Interface
}
// newClusters returns a Clusters
func newClusters(c *KubekeyV1alpha2Client) *clusters {
return &clusters{
client: c.RESTClient(),
}
}
// Get takes name of the cluster, and returns the corresponding cluster object, and an error if there is any.
func (c *clusters) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha2.Cluster, err error) {
result = &v1alpha2.Cluster{}
err = c.client.Get().
Resource("clusters").
Name(name).
VersionedParams(&options, scheme.ParameterCodec).
Do(ctx).
Into(result)
return
}
// List takes label and field selectors, and returns the list of Clusters that match those selectors.
func (c *clusters) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha2.ClusterList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1alpha2.ClusterList{}
err = c.client.Get().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do(ctx).
Into(result)
return
}
// Watch returns a watch.Interface that watches the requested clusters.
func (c *clusters) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch(ctx)
}
// Create takes the representation of a cluster and creates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *clusters) Create(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.CreateOptions) (result *v1alpha2.Cluster, err error) {
result = &v1alpha2.Cluster{}
err = c.client.Post().
Resource("clusters").
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// Update takes the representation of a cluster and updates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *clusters) Update(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (result *v1alpha2.Cluster, err error) {
result = &v1alpha2.Cluster{}
err = c.client.Put().
Resource("clusters").
Name(cluster.Name).
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *clusters) UpdateStatus(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (result *v1alpha2.Cluster, err error) {
result = &v1alpha2.Cluster{}
err = c.client.Put().
Resource("clusters").
Name(cluster.Name).
SubResource("status").
VersionedParams(&opts, scheme.ParameterCodec).
Body(cluster).
Do(ctx).
Into(result)
return
}
// Delete takes name of the cluster and deletes it. Returns an error if one occurs.
func (c *clusters) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
return c.client.Delete().
Resource("clusters").
Name(name).
Body(&opts).
Do(ctx).
Error()
}
// DeleteCollection deletes a collection of objects.
func (c *clusters) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
var timeout time.Duration
if listOpts.TimeoutSeconds != nil {
timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Resource("clusters").
VersionedParams(&listOpts, scheme.ParameterCodec).
Timeout(timeout).
Body(&opts).
Do(ctx).
Error()
}
// Patch applies the patch and returns the patched cluster.
func (c *clusters) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha2.Cluster, err error) {
result = &v1alpha2.Cluster{}
err = c.client.Patch(pt).
Resource("clusters").
Name(name).
SubResource(subresources...).
VersionedParams(&opts, scheme.ParameterCodec).
Body(data).
Do(ctx).
Into(result)
return
}

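The typed client above issues every request through client-go's fluent builder chain (`Resource(...).Name(...).VersionedParams(...).Do(ctx).Into(result)`). A minimal self-contained sketch of that builder style, with toy types rather than the real `rest.Request`, looks like this:

```go
package main

import "fmt"

// Request is a toy stand-in for client-go's rest.Request: each method
// records one piece of the request and returns the receiver for chaining.
type Request struct {
	verb, resource, name string
}

func NewRequest(verb string) *Request { return &Request{verb: verb} }

func (r *Request) Resource(res string) *Request { r.resource = res; return r }
func (r *Request) Name(n string) *Request       { r.name = n; return r }

// Do renders the path the accumulated request would be issued against.
func (r *Request) Do() string {
	return fmt.Sprintf("%s /apis/kubekey/v1alpha2/%s/%s", r.verb, r.resource, r.name)
}

func main() {
	// Mirrors the shape of clusters.Get: GET on resource "clusters", one name.
	fmt.Println(NewRequest("GET").Resource("clusters").Name("demo").Do())
	// prints: GET /apis/kubekey/v1alpha2/clusters/demo
}
```

The chaining works because every builder method mutates the receiver and returns it, so a whole request reads as one expression, as in the generated `Get`, `List`, and `Patch` methods above.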

@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// This package has the automatically generated typed clients.
package v1alpha2


@@ -0,0 +1,19 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
// Package fake has the automatically generated clients.
package fake


@@ -0,0 +1,132 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
)
// FakeClusters implements ClusterInterface
type FakeClusters struct {
Fake *FakeKubekeyV1alpha2
}
var clustersResource = schema.GroupVersionResource{Group: "kubekey", Version: "v1alpha2", Resource: "clusters"}
var clustersKind = schema.GroupVersionKind{Group: "kubekey", Version: "v1alpha2", Kind: "Cluster"}
// Get takes name of the cluster, and returns the corresponding cluster object, and an error if there is any.
func (c *FakeClusters) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha2.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootGetAction(clustersResource, name), &v1alpha2.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha2.Cluster), err
}
// List takes label and field selectors, and returns the list of Clusters that match those selectors.
func (c *FakeClusters) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha2.ClusterList, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootListAction(clustersResource, clustersKind, opts), &v1alpha2.ClusterList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1alpha2.ClusterList{ListMeta: obj.(*v1alpha2.ClusterList).ListMeta}
for _, item := range obj.(*v1alpha2.ClusterList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested clusters.
func (c *FakeClusters) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewRootWatchAction(clustersResource, opts))
}
// Create takes the representation of a cluster and creates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *FakeClusters) Create(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.CreateOptions) (result *v1alpha2.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootCreateAction(clustersResource, cluster), &v1alpha2.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha2.Cluster), err
}
// Update takes the representation of a cluster and updates it. Returns the server's representation of the cluster, and an error, if there is any.
func (c *FakeClusters) Update(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (result *v1alpha2.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateAction(clustersResource, cluster), &v1alpha2.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha2.Cluster), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeClusters) UpdateStatus(ctx context.Context, cluster *v1alpha2.Cluster, opts v1.UpdateOptions) (*v1alpha2.Cluster, error) {
obj, err := c.Fake.
Invokes(testing.NewRootUpdateSubresourceAction(clustersResource, "status", cluster), &v1alpha2.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha2.Cluster), err
}
// Delete takes name of the cluster and deletes it. Returns an error if one occurs.
func (c *FakeClusters) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(clustersResource, name), &v1alpha2.Cluster{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeClusters) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewRootDeleteCollectionAction(clustersResource, listOpts)
_, err := c.Fake.Invokes(action, &v1alpha2.ClusterList{})
return err
}
// Patch applies the patch and returns the patched cluster.
func (c *FakeClusters) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha2.Cluster, err error) {
obj, err := c.Fake.
Invokes(testing.NewRootPatchSubresourceAction(clustersResource, name, pt, data, subresources...), &v1alpha2.Cluster{})
if obj == nil {
return nil, err
}
return obj.(*v1alpha2.Cluster), err
}

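`FakeClusters.List` applies the label selector client-side: it extracts the selector from the list options and keeps only items whose labels match. A self-contained sketch of that filtering step, with toy types in place of `labels.Selector` and `v1alpha2.ClusterList`:

```go
package main

import "fmt"

// item is a toy stand-in for a Cluster object with metadata labels.
type item struct {
	name   string
	labels map[string]string
}

// matches reports whether labels satisfies every key/value in selector,
// the same semantics as an equality-based label selector.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// filter keeps only the items whose labels match the selector, as
// FakeClusters.List does after invoking the tracker.
func filter(items []item, selector map[string]string) []item {
	var out []item
	for _, it := range items {
		if matches(selector, it.labels) {
			out = append(out, it)
		}
	}
	return out
}

func main() {
	all := []item{
		{"a", map[string]string{"env": "dev"}},
		{"b", map[string]string{"env": "prod"}},
	}
	got := filter(all, map[string]string{"env": "prod"})
	fmt.Println(len(got), got[0].name)
	// prints: 1 b
}
```

In the real fake, a nil selector falls back to `labels.Everything()`, which matches all items; the sketch's empty map behaves the same way.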

@@ -0,0 +1,39 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
v1alpha2 "bytetrade.io/web3os/installer/clients/clientset/versioned/typed/kubekey/v1alpha2"
rest "k8s.io/client-go/rest"
testing "k8s.io/client-go/testing"
)
type FakeKubekeyV1alpha2 struct {
*testing.Fake
}
func (c *FakeKubekeyV1alpha2) Clusters() v1alpha2.ClusterInterface {
return &FakeClusters{c}
}
// RESTClient returns a RESTClient that is used to communicate
// with the API server by this client implementation.
func (c *FakeKubekeyV1alpha2) RESTClient() rest.Interface {
var ret *rest.RESTClient
return ret
}


@@ -0,0 +1,20 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha2
type ClusterExpansion interface{}


@@ -0,0 +1,88 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1alpha2
import (
v1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
"bytetrade.io/web3os/installer/clients/clientset/versioned/scheme"
rest "k8s.io/client-go/rest"
)
type KubekeyV1alpha2Interface interface {
RESTClient() rest.Interface
ClustersGetter
}
// KubekeyV1alpha2Client is used to interact with features provided by the kubekey group.
type KubekeyV1alpha2Client struct {
restClient rest.Interface
}
func (c *KubekeyV1alpha2Client) Clusters() ClusterInterface {
return newClusters(c)
}
// NewForConfig creates a new KubekeyV1alpha2Client for the given config.
func NewForConfig(c *rest.Config) (*KubekeyV1alpha2Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientFor(&config)
if err != nil {
return nil, err
}
return &KubekeyV1alpha2Client{client}, nil
}
// NewForConfigOrDie creates a new KubekeyV1alpha2Client for the given config and
// panics if there is an error in the config.
func NewForConfigOrDie(c *rest.Config) *KubekeyV1alpha2Client {
client, err := NewForConfig(c)
if err != nil {
panic(err)
}
return client
}
// New creates a new KubekeyV1alpha2Client for the given RESTClient.
func New(c rest.Interface) *KubekeyV1alpha2Client {
return &KubekeyV1alpha2Client{c}
}
func setConfigDefaults(config *rest.Config) error {
gv := v1alpha2.GroupVersion
config.GroupVersion = &gv
config.APIPath = "/apis"
config.NegotiatedSerializer = scheme.Codecs.WithoutConversion()
if config.UserAgent == "" {
config.UserAgent = rest.DefaultKubernetesUserAgent()
}
return nil
}
// RESTClient returns a RESTClient that is used to communicate
// with the API server by this client implementation.
func (c *KubekeyV1alpha2Client) RESTClient() rest.Interface {
if c == nil {
return nil
}
return c.restClient
}

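Note the copy-then-default pattern in `NewForConfig`: the caller's `rest.Config` is copied by value before `setConfigDefaults` mutates it, so the caller's config is never changed. A minimal sketch of just that pattern, with a toy `Config` rather than the real `rest.Config`:

```go
package main

import "fmt"

// Config is a toy stand-in for rest.Config; only the fields the
// defaulting touches are modeled.
type Config struct {
	APIPath, UserAgent string
}

// setConfigDefaults mirrors the generated helper: it fills in the API
// path and a fallback user agent on the copy it is handed.
func setConfigDefaults(c *Config) {
	c.APIPath = "/apis"
	if c.UserAgent == "" {
		c.UserAgent = "installer/v0.0.0" // illustrative value, not the real default
	}
}

// NewForConfig copies the caller's config before defaulting, so the
// original is never mutated.
func NewForConfig(c *Config) *Config {
	config := *c // value copy; defaulting below cannot leak back
	setConfigDefaults(&config)
	return &config
}

func main() {
	orig := &Config{}
	client := NewForConfig(orig)
	fmt.Println(orig.APIPath == "", client.APIPath)
	// prints: true /apis
}
```

This is why callers can safely reuse one `rest.Config` to build clients for several API groups: each `NewForConfig` defaults its own private copy.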

@@ -0,0 +1,179 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package externalversions
import (
reflect "reflect"
sync "sync"
time "time"
versioned "bytetrade.io/web3os/installer/clients/clientset/versioned"
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
kubekey "bytetrade.io/web3os/installer/clients/informers/externalversions/kubekey"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
cache "k8s.io/client-go/tools/cache"
)
// SharedInformerOption defines the functional option type for SharedInformerFactory.
type SharedInformerOption func(*sharedInformerFactory) *sharedInformerFactory
type sharedInformerFactory struct {
client versioned.Interface
namespace string
tweakListOptions internalinterfaces.TweakListOptionsFunc
lock sync.Mutex
defaultResync time.Duration
customResync map[reflect.Type]time.Duration
informers map[reflect.Type]cache.SharedIndexInformer
// startedInformers is used for tracking which informers have been started.
// This allows Start() to be called multiple times safely.
startedInformers map[reflect.Type]bool
}
// WithCustomResyncConfig sets a custom resync period for the specified informer types.
func WithCustomResyncConfig(resyncConfig map[v1.Object]time.Duration) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
for k, v := range resyncConfig {
factory.customResync[reflect.TypeOf(k)] = v
}
return factory
}
}
// WithTweakListOptions sets a custom filter on all listers of the configured SharedInformerFactory.
func WithTweakListOptions(tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
factory.tweakListOptions = tweakListOptions
return factory
}
}
// WithNamespace limits the SharedInformerFactory to the specified namespace.
func WithNamespace(namespace string) SharedInformerOption {
return func(factory *sharedInformerFactory) *sharedInformerFactory {
factory.namespace = namespace
return factory
}
}
// NewSharedInformerFactory constructs a new instance of sharedInformerFactory for all namespaces.
func NewSharedInformerFactory(client versioned.Interface, defaultResync time.Duration) SharedInformerFactory {
return NewSharedInformerFactoryWithOptions(client, defaultResync)
}
// NewFilteredSharedInformerFactory constructs a new instance of sharedInformerFactory.
// Listers obtained via this SharedInformerFactory will be subject to the same filters
// as specified here.
// Deprecated: Please use NewSharedInformerFactoryWithOptions instead
func NewFilteredSharedInformerFactory(client versioned.Interface, defaultResync time.Duration, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerFactory {
return NewSharedInformerFactoryWithOptions(client, defaultResync, WithNamespace(namespace), WithTweakListOptions(tweakListOptions))
}
// NewSharedInformerFactoryWithOptions constructs a new instance of a SharedInformerFactory with additional options.
func NewSharedInformerFactoryWithOptions(client versioned.Interface, defaultResync time.Duration, options ...SharedInformerOption) SharedInformerFactory {
factory := &sharedInformerFactory{
client: client,
namespace: v1.NamespaceAll,
defaultResync: defaultResync,
informers: make(map[reflect.Type]cache.SharedIndexInformer),
startedInformers: make(map[reflect.Type]bool),
customResync: make(map[reflect.Type]time.Duration),
}
// Apply all options
for _, opt := range options {
factory = opt(factory)
}
return factory
}
// Start initializes all requested informers.
func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) {
f.lock.Lock()
defer f.lock.Unlock()
for informerType, informer := range f.informers {
if !f.startedInformers[informerType] {
go informer.Run(stopCh)
f.startedInformers[informerType] = true
}
}
}
// WaitForCacheSync waits for the caches of all started informers to sync.
func (f *sharedInformerFactory) WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool {
informers := func() map[reflect.Type]cache.SharedIndexInformer {
f.lock.Lock()
defer f.lock.Unlock()
informers := map[reflect.Type]cache.SharedIndexInformer{}
for informerType, informer := range f.informers {
if f.startedInformers[informerType] {
informers[informerType] = informer
}
}
return informers
}()
res := map[reflect.Type]bool{}
for informType, informer := range informers {
res[informType] = cache.WaitForCacheSync(stopCh, informer.HasSynced)
}
return res
}
// InformerFor returns the SharedIndexInformer for obj using an internal
// client.
func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer {
f.lock.Lock()
defer f.lock.Unlock()
informerType := reflect.TypeOf(obj)
informer, exists := f.informers[informerType]
if exists {
return informer
}
resyncPeriod, exists := f.customResync[informerType]
if !exists {
resyncPeriod = f.defaultResync
}
informer = newFunc(f.client, resyncPeriod)
f.informers[informerType] = informer
return informer
}
// SharedInformerFactory provides shared informers for resources in all known
// API group versions.
type SharedInformerFactory interface {
internalinterfaces.SharedInformerFactory
ForResource(resource schema.GroupVersionResource) (GenericInformer, error)
WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool
Kubekey() kubekey.Interface
}
func (f *sharedInformerFactory) Kubekey() kubekey.Interface {
return kubekey.New(f, f.namespace, f.tweakListOptions)
}

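`NewSharedInformerFactoryWithOptions` uses the functional-options pattern: each `SharedInformerOption` is a function that takes the factory and returns it, and the constructor folds the options over a defaulted factory. A self-contained sketch of the same pattern with toy names (not the installer's real types):

```go
package main

import "fmt"

// factory is a toy stand-in for sharedInformerFactory.
type factory struct {
	namespace string
	resync    int
}

// Option mirrors SharedInformerOption: configure the factory, return it.
type Option func(*factory) *factory

// WithNamespace mirrors the real WithNamespace option above.
func WithNamespace(ns string) Option {
	return func(f *factory) *factory {
		f.namespace = ns
		return f
	}
}

// New mirrors NewSharedInformerFactoryWithOptions: build with defaults,
// then apply each option in order.
func New(resync int, opts ...Option) *factory {
	f := &factory{namespace: "", resync: resync}
	for _, opt := range opts {
		f = opt(f)
	}
	return f
}

func main() {
	f := New(30, WithNamespace("kube-system"))
	fmt.Println(f.namespace, f.resync)
	// prints: kube-system 30
}
```

The payoff is the same as in the generated code: new knobs can be added as options without ever changing the constructor's signature, which is also how the deprecated `NewFilteredSharedInformerFactory` is implemented in terms of `WithNamespace` and `WithTweakListOptions`.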

@@ -0,0 +1,66 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package externalversions
import (
"fmt"
v1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
v1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
schema "k8s.io/apimachinery/pkg/runtime/schema"
cache "k8s.io/client-go/tools/cache"
)
// GenericInformer is a type of SharedIndexInformer that locates and delegates to other
// sharedInformers based on type
type GenericInformer interface {
Informer() cache.SharedIndexInformer
Lister() cache.GenericLister
}
type genericInformer struct {
informer cache.SharedIndexInformer
resource schema.GroupResource
}
// Informer returns the SharedIndexInformer.
func (f *genericInformer) Informer() cache.SharedIndexInformer {
return f.informer
}
// Lister returns the GenericLister.
func (f *genericInformer) Lister() cache.GenericLister {
return cache.NewGenericLister(f.Informer().GetIndexer(), f.resource)
}
// ForResource gives generic access to a shared informer of the matching type
// TODO extend this to unknown resources with a client pool
func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource) (GenericInformer, error) {
switch resource {
// Group=kubekey, Version=v1alpha1
case v1alpha1.SchemeGroupVersion.WithResource("clusters"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Kubekey().V1alpha1().Clusters().Informer()}, nil
// Group=kubekey, Version=v1alpha2
case v1alpha2.SchemeGroupVersion.WithResource("clusters"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Kubekey().V1alpha2().Clusters().Informer()}, nil
}
return nil, fmt.Errorf("no informer found for %v", resource)
}


@@ -0,0 +1,39 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package internalinterfaces
import (
time "time"
versioned "bytetrade.io/web3os/installer/clients/clientset/versioned"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
cache "k8s.io/client-go/tools/cache"
)
// NewInformerFunc takes versioned.Interface and time.Duration to return a SharedIndexInformer.
type NewInformerFunc func(versioned.Interface, time.Duration) cache.SharedIndexInformer
// SharedInformerFactory is a small interface that allows adding an informer without an import cycle.
type SharedInformerFactory interface {
Start(stopCh <-chan struct{})
InformerFor(obj runtime.Object, newFunc NewInformerFunc) cache.SharedIndexInformer
}
// TweakListOptionsFunc is a function that transforms a v1.ListOptions.
type TweakListOptionsFunc func(*v1.ListOptions)


@@ -0,0 +1,53 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package kubekey
import (
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
v1alpha1 "bytetrade.io/web3os/installer/clients/informers/externalversions/kubekey/v1alpha1"
v1alpha2 "bytetrade.io/web3os/installer/clients/informers/externalversions/kubekey/v1alpha2"
)
// Interface provides access to each of this group's versions.
type Interface interface {
// V1alpha1 provides access to shared informers for resources in V1alpha1.
V1alpha1() v1alpha1.Interface
// V1alpha2 provides access to shared informers for resources in V1alpha2.
V1alpha2() v1alpha2.Interface
}
type group struct {
factory internalinterfaces.SharedInformerFactory
namespace string
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// New returns a new Interface.
func New(f internalinterfaces.SharedInformerFactory, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) Interface {
return &group{factory: f, namespace: namespace, tweakListOptions: tweakListOptions}
}
// V1alpha1 returns a new v1alpha1.Interface.
func (g *group) V1alpha1() v1alpha1.Interface {
return v1alpha1.New(g.factory, g.namespace, g.tweakListOptions)
}
// V1alpha2 returns a new v1alpha2.Interface.
func (g *group) V1alpha2() v1alpha2.Interface {
return v1alpha2.New(g.factory, g.namespace, g.tweakListOptions)
}


@@ -0,0 +1,88 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1alpha1
import (
"context"
time "time"
kubekeyv1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
versioned "bytetrade.io/web3os/installer/clients/clientset/versioned"
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
v1alpha1 "bytetrade.io/web3os/installer/clients/listers/kubekey/v1alpha1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
)
// ClusterInformer provides access to a shared informer and lister for
// Clusters.
type ClusterInformer interface {
Informer() cache.SharedIndexInformer
Lister() v1alpha1.ClusterLister
}
type clusterInformer struct {
factory internalinterfaces.SharedInformerFactory
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// NewClusterInformer constructs a new informer for Cluster type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewClusterInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
return NewFilteredClusterInformer(client, resyncPeriod, indexers, nil)
}
// NewFilteredClusterInformer constructs a new informer for Cluster type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFilteredClusterInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
return cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.KubekeyV1alpha1().Clusters().List(context.TODO(), options)
},
WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.KubekeyV1alpha1().Clusters().Watch(context.TODO(), options)
},
},
&kubekeyv1alpha1.Cluster{},
resyncPeriod,
indexers,
)
}
func (f *clusterInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
return NewFilteredClusterInformer(client, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
}
func (f *clusterInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&kubekeyv1alpha1.Cluster{}, f.defaultInformer)
}
func (f *clusterInformer) Lister() v1alpha1.ClusterLister {
return v1alpha1.NewClusterLister(f.Informer().GetIndexer())
}
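The generated comments above stress always obtaining informers through a shared factory rather than constructing independent ones. The memoization that makes this work can be sketched with a toy, stdlib-only stand-in (this is not the client-go API; `factory`, `InformerFor`, and `created` here are illustrative names):

```go
package main

import (
	"fmt"
	"sync"
)

// informer is a placeholder for cache.SharedIndexInformer.
type informer struct{ resource string }

// factory is a toy stand-in for SharedInformerFactory: InformerFor
// memoizes one informer per resource type, so repeated callers share a
// single cache/watch instead of opening independent connections.
type factory struct {
	mu        sync.Mutex
	informers map[string]*informer
	created   int // how many informers were actually constructed
}

func (f *factory) InformerFor(resource string) *informer {
	f.mu.Lock()
	defer f.mu.Unlock()
	if inf, ok := f.informers[resource]; ok {
		return inf // shared: later callers get the same instance
	}
	inf := &informer{resource: resource}
	f.informers[resource] = inf
	f.created++
	return inf
}

func main() {
	f := &factory{informers: map[string]*informer{}}
	a := f.InformerFor("Cluster")
	b := f.InformerFor("Cluster")
	fmt.Println(a == b, f.created) // true 1
}
```

This is why `clusterInformer.Informer()` above delegates to `f.factory.InformerFor(...)` instead of calling `NewFilteredClusterInformer` directly: the factory deduplicates per resource type.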

View File

@@ -0,0 +1,44 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1alpha1
import (
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
)
// Interface provides access to all the informers in this group version.
type Interface interface {
// Clusters returns a ClusterInformer.
Clusters() ClusterInformer
}
type version struct {
factory internalinterfaces.SharedInformerFactory
namespace string
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// New returns a new Interface.
func New(f internalinterfaces.SharedInformerFactory, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) Interface {
return &version{factory: f, namespace: namespace, tweakListOptions: tweakListOptions}
}
// Clusters returns a ClusterInformer.
func (v *version) Clusters() ClusterInformer {
return &clusterInformer{factory: v.factory, tweakListOptions: v.tweakListOptions}
}

View File

@@ -0,0 +1,88 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1alpha2
import (
"context"
time "time"
kubekeyv1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
versioned "bytetrade.io/web3os/installer/clients/clientset/versioned"
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
v1alpha2 "bytetrade.io/web3os/installer/clients/listers/kubekey/v1alpha2"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
)
// ClusterInformer provides access to a shared informer and lister for
// Clusters.
type ClusterInformer interface {
Informer() cache.SharedIndexInformer
Lister() v1alpha2.ClusterLister
}
type clusterInformer struct {
factory internalinterfaces.SharedInformerFactory
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// NewClusterInformer constructs a new informer for Cluster type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewClusterInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
return NewFilteredClusterInformer(client, resyncPeriod, indexers, nil)
}
// NewFilteredClusterInformer constructs a new informer for Cluster type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFilteredClusterInformer(client versioned.Interface, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
return cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.KubekeyV1alpha2().Clusters().List(context.TODO(), options)
},
WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.KubekeyV1alpha2().Clusters().Watch(context.TODO(), options)
},
},
&kubekeyv1alpha2.Cluster{},
resyncPeriod,
indexers,
)
}
func (f *clusterInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
return NewFilteredClusterInformer(client, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
}
func (f *clusterInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&kubekeyv1alpha2.Cluster{}, f.defaultInformer)
}
func (f *clusterInformer) Lister() v1alpha2.ClusterLister {
return v1alpha2.NewClusterLister(f.Informer().GetIndexer())
}

View File

@@ -0,0 +1,44 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1alpha2
import (
internalinterfaces "bytetrade.io/web3os/installer/clients/informers/externalversions/internalinterfaces"
)
// Interface provides access to all the informers in this group version.
type Interface interface {
// Clusters returns a ClusterInformer.
Clusters() ClusterInformer
}
type version struct {
factory internalinterfaces.SharedInformerFactory
namespace string
tweakListOptions internalinterfaces.TweakListOptionsFunc
}
// New returns a new Interface.
func New(f internalinterfaces.SharedInformerFactory, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) Interface {
return &version{factory: f, namespace: namespace, tweakListOptions: tweakListOptions}
}
// Clusters returns a ClusterInformer.
func (v *version) Clusters() ClusterInformer {
return &clusterInformer{factory: v.factory, tweakListOptions: v.tweakListOptions}
}

View File

@@ -0,0 +1,67 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1alpha1
import (
v1alpha1 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
)
// ClusterLister helps list Clusters.
// All objects returned here must be treated as read-only.
type ClusterLister interface {
// List lists all Clusters in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1alpha1.Cluster, err error)
// Get retrieves the Cluster from the index for a given name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1alpha1.Cluster, error)
ClusterListerExpansion
}
// clusterLister implements the ClusterLister interface.
type clusterLister struct {
indexer cache.Indexer
}
// NewClusterLister returns a new ClusterLister.
func NewClusterLister(indexer cache.Indexer) ClusterLister {
return &clusterLister{indexer: indexer}
}
// List lists all Clusters in the indexer.
func (s *clusterLister) List(selector labels.Selector) (ret []*v1alpha1.Cluster, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1alpha1.Cluster))
})
return ret, err
}
// Get retrieves the Cluster from the index for a given name.
func (s *clusterLister) Get(name string) (*v1alpha1.Cluster, error) {
obj, exists, err := s.indexer.GetByKey(name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1alpha1.Resource("cluster"), name)
}
return obj.(*v1alpha1.Cluster), nil
}
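The lister above is a thin read-only view over an indexer: `List` iterates the index, `Get` does a key lookup and converts a miss into a not-found error. The same shape can be sketched with a plain map standing in for `cache.Indexer` (a minimal illustration; `Cluster` here is a trimmed-down stand-in, not the real API type):

```go
package main

import (
	"errors"
	"fmt"
	"sort"
)

// Cluster is a trimmed stand-in for the v1alpha1.Cluster resource.
type Cluster struct{ Name string }

var errNotFound = errors.New("cluster not found")

// clusterLister mirrors the generated lister's shape: read-only
// List/Get backed by an index keyed by object name.
type clusterLister struct {
	indexer map[string]*Cluster
}

func (s *clusterLister) List() []*Cluster {
	var ret []*Cluster
	for _, c := range s.indexer {
		ret = append(ret, c)
	}
	// Sort for deterministic output; the real lister filters by a
	// label selector via cache.ListAll instead.
	sort.Slice(ret, func(i, j int) bool { return ret[i].Name < ret[j].Name })
	return ret
}

func (s *clusterLister) Get(name string) (*Cluster, error) {
	obj, exists := s.indexer[name]
	if !exists {
		// The generated code returns errors.NewNotFound here so
		// callers can use apierrors.IsNotFound.
		return nil, errNotFound
	}
	return obj, nil
}

func main() {
	l := &clusterLister{indexer: map[string]*Cluster{
		"alpha": {Name: "alpha"},
		"beta":  {Name: "beta"},
	}}
	for _, c := range l.List() {
		fmt.Println(c.Name)
	}
	if _, err := l.Get("gamma"); err != nil {
		fmt.Println("get gamma:", err)
	}
}
```

Note that objects come back by pointer straight out of the index, which is why the generated comments insist they must be treated as read-only.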

View File

@@ -0,0 +1,22 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1alpha1
// ClusterListerExpansion allows custom methods to be added to
// ClusterLister.
type ClusterListerExpansion interface{}

View File

@@ -0,0 +1,67 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1alpha2
import (
v1alpha2 "bytetrade.io/web3os/installer/apis/kubekey/v1alpha2"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
)
// ClusterLister helps list Clusters.
// All objects returned here must be treated as read-only.
type ClusterLister interface {
// List lists all Clusters in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1alpha2.Cluster, err error)
// Get retrieves the Cluster from the index for a given name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1alpha2.Cluster, error)
ClusterListerExpansion
}
// clusterLister implements the ClusterLister interface.
type clusterLister struct {
indexer cache.Indexer
}
// NewClusterLister returns a new ClusterLister.
func NewClusterLister(indexer cache.Indexer) ClusterLister {
return &clusterLister{indexer: indexer}
}
// List lists all Clusters in the indexer.
func (s *clusterLister) List(selector labels.Selector) (ret []*v1alpha2.Cluster, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1alpha2.Cluster))
})
return ret, err
}
// Get retrieves the Cluster from the index for a given name.
func (s *clusterLister) Get(name string) (*v1alpha2.Cluster, error) {
obj, exists, err := s.indexer.GetByKey(name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1alpha2.Resource("cluster"), name)
}
return obj.(*v1alpha2.Cluster), nil
}

View File

@@ -0,0 +1,22 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1alpha2
// ClusterListerExpansion allows custom methods to be added to
// ClusterLister.
type ClusterListerExpansion interface{}

View File

@@ -0,0 +1,21 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdDisableGpu() *cobra.Command {
cmd := &cobra.Command{
Use: "disable",
Short: "Disable GPU drivers for the Olares node",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.DisableGpuNode(); err != nil {
log.Fatalf("error: %v", err)
}
},
}
return cmd
}

cli/cmd/ctl/gpu/enable.go Normal file
View File

@@ -0,0 +1,21 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdEnableGpu() *cobra.Command {
cmd := &cobra.Command{
Use: "enable",
Short: "Enable GPU drivers for the Olares node",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.EnableGpuNode(); err != nil {
log.Fatalf("error: %v", err)
}
},
}
return cmd
}

View File

@@ -0,0 +1,24 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/cmd/ctl/options"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdInstallGpu() *cobra.Command {
o := options.NewInstallGpuOptions()
cmd := &cobra.Command{
Use: "install",
Short: "Install GPU drivers for Olares",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.InstallGpuDrivers(o); err != nil {
log.Fatalf("error: %v", err)
}
},
}
o.AddFlags(cmd)
return cmd
}

cli/cmd/ctl/gpu/root.go Normal file
View File

@@ -0,0 +1,20 @@
package gpu
import (
"github.com/spf13/cobra"
)
func NewCmdGpu() *cobra.Command {
rootGpuCmd := &cobra.Command{
Use: "gpu",
Short: "Install / uninstall / enable / disable / upgrade / status GPU commands for Olares",
}
rootGpuCmd.AddCommand(NewCmdInstallGpu())
rootGpuCmd.AddCommand(NewCmdUninstallpu())
rootGpuCmd.AddCommand(NewCmdEnableGpu())
rootGpuCmd.AddCommand(NewCmdDisableGpu())
rootGpuCmd.AddCommand(NewCmdUpgradeGpu())
rootGpuCmd.AddCommand(NewCmdGpuStatus())
return rootGpuCmd
}

cli/cmd/ctl/gpu/status.go Normal file
View File

@@ -0,0 +1,21 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdGpuStatus() *cobra.Command {
cmd := &cobra.Command{
Use: "status",
Short: "Print GPU drivers status",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.GpuDriverStatus(); err != nil {
log.Fatalf("error: %v", err)
}
},
}
return cmd
}

View File

@@ -0,0 +1,21 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdUninstallpu() *cobra.Command {
cmd := &cobra.Command{
Use: "uninstall",
Short: "Uninstall GPU drivers for Olares",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.UninstallGpuDrivers(); err != nil {
log.Fatalf("error: %v", err)
}
},
}
return cmd
}

View File

@@ -0,0 +1,24 @@
package gpu
import (
"log"
"bytetrade.io/web3os/installer/cmd/ctl/options"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
)
func NewCmdUpgradeGpu() *cobra.Command {
o := options.NewInstallGpuOptions()
cmd := &cobra.Command{
Use: "upgrade",
Short: "Upgrade GPU drivers for Olares",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.UpgradeGpuDrivers(o); err != nil {
log.Fatalf("error: %v", err)
}
},
}
o.AddFlags(cmd)
return cmd
}

cli/cmd/ctl/node/add.go Normal file
View File

@@ -0,0 +1,23 @@
package node
import (
"bytetrade.io/web3os/installer/cmd/ctl/options"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
"log"
)
func NewCmdAddNode() *cobra.Command {
o := options.NewAddNodeOptions()
cmd := &cobra.Command{
Use: "add",
Short: "Add a worker node to the cluster",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.AddNodePipeline(o); err != nil {
log.Fatal(err)
}
},
}
o.AddFlags(cmd)
return cmd
}

View File

@@ -0,0 +1 @@
package node

View File

@@ -0,0 +1,23 @@
package node
import (
"bytetrade.io/web3os/installer/cmd/ctl/options"
"bytetrade.io/web3os/installer/pkg/pipelines"
"github.com/spf13/cobra"
"log"
)
func NewCmdMasterInfo() *cobra.Command {
o := options.NewMasterInfoOptions()
cmd := &cobra.Command{
Use: "masterinfo",
Short: "Get information about the master node and check whether the current node can be added to the cluster",
Run: func(cmd *cobra.Command, args []string) {
if err := pipelines.MasterInfoPipeline(o); err != nil {
log.Fatal(err)
}
},
}
o.AddFlags(cmd)
return cmd
}

cli/cmd/ctl/node/root.go Normal file
View File

@@ -0,0 +1,13 @@
package node
import "github.com/spf13/cobra"
func NewNodeCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "node",
Short: "Cluster node related operations",
}
cmd.AddCommand(NewCmdMasterInfo())
cmd.AddCommand(NewCmdAddNode())
return cmd
}

View File

@@ -0,0 +1,162 @@
package options
import (
"bytetrade.io/web3os/installer/pkg/common"
cc "bytetrade.io/web3os/installer/pkg/core/common"
"bytetrade.io/web3os/installer/pkg/phase/cluster"
"github.com/spf13/cobra"
)
type CliTerminusUninstallOptions struct {
Version string
BaseDir string
All bool
Phase string
Quiet bool
}
func NewCliTerminusUninstallOptions() *CliTerminusUninstallOptions {
return &CliTerminusUninstallOptions{}
}
func (o *CliTerminusUninstallOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
cmd.Flags().BoolVar(&o.All, "all", false, "Uninstall Olares completely, including prepared dependencies")
cmd.Flags().StringVar(&o.Phase, "phase", cluster.PhaseInstall.String(), "Uninstall from a specified phase and revert to the previous one. For example, using --phase install will remove the tasks performed in the 'install' phase, effectively returning the system to the 'prepare' state.")
cmd.Flags().BoolVar(&o.Quiet, "quiet", false, "Quiet mode, default: false")
}
type CliTerminusInstallOptions struct {
Version string
KubeType string
WithJuiceFS bool
MiniKubeProfile string
BaseDir string
common.SwapConfig
}
func NewCliTerminusInstallOptions() *CliTerminusInstallOptions {
return &CliTerminusInstallOptions{}
}
func (o *CliTerminusInstallOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVar(&o.KubeType, "kube", "k3s", "Set kube type, e.g., k3s or k8s")
cmd.Flags().BoolVar(&o.WithJuiceFS, "with-juicefs", false, "Use JuiceFS as the rootfs for Olares workloads, rather than the local disk.")
cmd.Flags().StringVarP(&o.MiniKubeProfile, "profile", "p", "", "Set Minikube profile name, only on macOS, defaults to "+common.MinikubeDefaultProfile)
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
(&o.SwapConfig).AddFlags(cmd.Flags())
}
type CliPrepareSystemOptions struct {
Version string
KubeType string
RegistryMirrors string
BaseDir string
MinikubeProfile string
}
func NewCliPrepareSystemOptions() *CliPrepareSystemOptions {
return &CliPrepareSystemOptions{}
}
func (o *CliPrepareSystemOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVar(&o.KubeType, "kube", "k3s", "Set kube type, e.g., k3s or k8s")
cmd.Flags().StringVarP(&o.RegistryMirrors, "registry-mirrors", "r", "", "Docker Container registry mirrors, multiple mirrors are separated by commas")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
cmd.Flags().StringVarP(&o.MinikubeProfile, "profile", "p", "", "Set Minikube profile name, only on macOS, defaults to "+common.MinikubeDefaultProfile)
}
type ChangeIPOptions struct {
Version string
BaseDir string
NewMasterHost string
WSLDistribution string
MinikubeProfile string
}
func NewChangeIPOptions() *ChangeIPOptions {
return &ChangeIPOptions{}
}
func (o *ChangeIPOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
cmd.Flags().StringVar(&o.NewMasterHost, "new-master-host", "", "Update the master node's IP if it has changed, only on a Linux worker node")
cmd.Flags().StringVarP(&o.WSLDistribution, "distribution", "d", "", "Set WSL distribution name, only on Windows, defaults to "+common.WSLDefaultDistribution)
cmd.Flags().StringVarP(&o.MinikubeProfile, "profile", "p", "", "Set Minikube profile name, only on macOS, defaults to "+common.MinikubeDefaultProfile)
}
type PreCheckOptions struct {
Version string
BaseDir string
}
func NewPreCheckOptions() *PreCheckOptions {
return &PreCheckOptions{}
}
func (o *PreCheckOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
}
type InstallStorageOptions struct {
Version string
BaseDir string
}
func NewInstallStorageOptions() *InstallStorageOptions {
return &InstallStorageOptions{}
}
func (o *InstallStorageOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
}
type AddNodeOptions struct {
common.MasterHostConfig
Version string
BaseDir string
}
func NewAddNodeOptions() *AddNodeOptions {
return &AddNodeOptions{}
}
func (o *AddNodeOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
(&o.MasterHostConfig).AddFlags(cmd.Flags())
}
type MasterInfoOptions struct {
BaseDir string
common.MasterHostConfig
}
func NewMasterInfoOptions() *MasterInfoOptions {
return &MasterInfoOptions{}
}
func (o *MasterInfoOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
(&o.MasterHostConfig).AddFlags(cmd.Flags())
}
type UpgradeOptions struct {
Version string
BaseDir string
}
func NewUpgradeOptions() *UpgradeOptions {
return &UpgradeOptions{}
}
func (o *UpgradeOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set target Olares version to upgrade to, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
}

View File

@@ -0,0 +1,44 @@
package options
import (
cc "bytetrade.io/web3os/installer/pkg/core/common"
"github.com/spf13/cobra"
)
type CliDownloadWizardOptions struct {
Version string
KubeType string
BaseDir string
DownloadCdnUrl string
}
func NewCliDownloadWizardOptions() *CliDownloadWizardOptions {
return &CliDownloadWizardOptions{}
}
func (o *CliDownloadWizardOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
cmd.Flags().StringVar(&o.KubeType, "kube", "k3s", "Set kube type, e.g., k3s or k8s")
cmd.Flags().StringVar(&o.DownloadCdnUrl, "download-cdn-url", "", "Set the CDN-accelerated download address in the format https://example.cdn.com. If not set, the default download address is used")
}
type CliDownloadOptions struct {
Version string
KubeType string
Manifest string
BaseDir string
DownloadCdnUrl string
}
func NewCliDownloadOptions() *CliDownloadOptions {
return &CliDownloadOptions{}
}
func (o *CliDownloadOptions) AddFlags(cmd *cobra.Command) {
cmd.Flags().StringVarP(&o.Version, "version", "v", "", "Set Olares version, e.g., 1.10.0, 1.10.0-20241109")
cmd.Flags().StringVarP(&o.BaseDir, "base-dir", "b", "", "Set Olares package base dir, defaults to $HOME/"+cc.DefaultBaseDir)
cmd.Flags().StringVar(&o.Manifest, "manifest", "", "Set package manifest file, defaults to {base-dir}/versions/v{version}/installation.manifest")
cmd.Flags().StringVar(&o.KubeType, "kube", "k3s", "Set kube type, e.g., k3s or k8s")
cmd.Flags().StringVar(&o.DownloadCdnUrl, "download-cdn-url", "", "Set the CDN-accelerated download address in the format https://example.cdn.com. If not set, the default download address is used")
}

Some files were not shown because too many files have changed in this diff.