Compare commits


45 Commits

Author SHA1 Message Date
lovehunter9
05d14de4fe fix: files sync paste dir out bug 2025-07-15 21:16:34 +08:00
wiy
058cf31e44 system-frontend&user-service: update user-service & system-frontend new version (#1544)
* feat(user-service): update dataStore use redis

* feat(wise): remove from system-frontend
fix(settings): some bugs
fix(files): some bugs

* knowledge: remove knowledge, rss, argo

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-07-15 00:39:01 +08:00
hysyeah
72a5b2c6a2 app-service, bfl, cli, authelia,kubesphere: support create user from user cr (#1543)
* app-service, bfl, cli, authelia,kubesphere: support create user by cr

* fix: rm kubesphere-monitoring-federated ns
2025-07-14 23:48:53 +08:00
eball
f78890b01b otel: disable telemetry by default (#1542) 2025-07-14 23:48:18 +08:00
eball
13df294653 olaresd: refactor api server (#1541) 2025-07-14 23:47:55 +08:00
0x7fffff92
2af86e161a fix(headscale): Make the Affinity Rule Strict (#1540)
* fix(headscale): Make the Affinity Rule Strict

* fix(headscale): make ci happy

---------

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-07-14 23:47:25 +08:00
aby913
ee567c270c fix(files): external delete (#1539)
* fix(files): external delete

* login & system-frontend: update login and system-frontend new version

---------

Co-authored-by: qq815776412 <815776412@qq.com>
2025-07-12 00:23:59 +08:00
hysyeah
4246bcce06 fix: simplify nat permission request (#1538) 2025-07-12 00:23:10 +08:00
eball
fb73d62bd5 bfl: change unmount-api of file-server (#1537) 2025-07-12 00:22:27 +08:00
eball
209f0d15e3 authelia: send notification in user login phase (#1536)
* authelia: send notification in user login phase

* fix: set cookie nil

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-07-12 00:21:48 +08:00
dkeven
78911d44cf feat(gpu): add more metrics in GPU monitor API (#1535) 2025-07-12 00:20:41 +08:00
salt
d964c33c2d feat: Chinese uses both single-character segmentation and word segmen… (#1534)
feat: Chinese uses both single-character segmentation and word segmentation. Word segmentation is used for easier sorting.

Co-authored-by: ubuntu <you@example.com>
2025-07-11 22:00:14 +08:00
salt
2b54795e10 fix: waiting... Both uppercase and lowercase letters can be searched, include special token (#1533)
fix: Both uppercase and lowercase letters can be searched, and special characters can be searched as well.'

Co-authored-by: ubuntu <you@example.com>
2025-07-11 13:20:31 +08:00
aby913
efb4be4fcf fix(files): deletion and other fixes (#1532)
* fix(files): deletion and other fixes

* feat(files & marker): update files and market new version

* feat: update market worker count

* Update bfl_deploy.yaml

---------

Co-authored-by: qq815776412 <815776412@qq.com>
Co-authored-by: icebergtsn <zyh2433219116@gmail.com>
Co-authored-by: eball <liuy102@hotmail.com>
2025-07-11 00:35:46 +08:00
simon
89575096ba feat(knowledge): knowledge & download refactor (#1531)
* knowledge

* knowledge
2025-07-10 21:36:30 +08:00
dkeven
5edba60295 fix(cli): remove state files of olaresd when uninstalling (#1530) 2025-07-10 16:12:23 +08:00
eball
1aecc3495a ci: add a parameter of the code repository (#1529)
* ci: add a parameter of the code repository

* fix: file name bug

* refactor(cli): adjust local release command for vendor repo path

---------

Co-authored-by: dkeven <dkvvven@gmail.com>
2025-07-10 16:11:03 +08:00
salt
2d5c1fc484 feat: hybrid unigram search for title (#1528)
Co-authored-by: ubuntu <you@example.com>
2025-07-09 23:20:44 +08:00
hysyeah
81355f4a1c authelia: send login message to os.users.<olaresid> (#1527) 2025-07-09 23:20:13 +08:00
lovehunter9
2c4e9fb835 feat: seafile add support for avi, wmv, mkv, flv, rmvb (#1526) 2025-07-09 23:19:32 +08:00
dkeven
4947538e68 fix(daemon): apply filters correctly when listing users (#1525) 2025-07-09 23:18:39 +08:00
Peng Peng
21bb10b72b Revert "gpu: refactor gpu scheduler with cpp (#1475)"
This reverts commit ae3e4e6bb9.
2025-07-09 13:26:41 +08:00
wiy
8064c591f2 feat(files): files supports multiple nodes (#1524)
* feat(system-frontend): update files supports multiple nodes

* feat: add files routing gateway

* feat(media-server): surpport for multiple nodes

* feat(files): update files supports multiple nodes

---------

Co-authored-by: eball <liuy102@hotmail.com>
Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
Co-authored-by: aby913 <aby913@163.com>
2025-07-08 23:11:41 +08:00
Calvin W.
1073575a1d docs: add readmes for Olares components (#1522)
* docs: add readmes for Olares components

* merge with latest upstream
2025-07-08 21:34:05 +08:00
dkeven
4cf977f6df fix(ci): specify repo when checkout code for PR (#1523) 2025-07-08 17:53:46 +08:00
hysyeah
0dda3811c7 bfl, authelia, lldap: change access-token expiry duration, support refresh and revoke user token (#1521)
bfl, authelia, lldap: change access-token expiry duration and support refresh;revoke user token after reset password
2025-07-08 00:03:59 +08:00
hysyeah
2632b45fc2 bfl, app-service, system-frontend/dashboard: remove analytics (#1520)
* bfl, app-service: remove analytics

* fix(system-frontend): remove dashboard analytics

* fix(system-frontend): update system-frontend version

---------

Co-authored-by: yyh <24493052+yongheng2016@users.noreply.github.com>
2025-07-08 00:03:11 +08:00
berg
ae3f3d6a20 market: v1.12 new category and fix some bugs. (#1518)
feat: v1.12 new category and fix some bugs.
2025-07-05 00:55:37 +08:00
eball
4f3b824f48 authelia: update oidc cert (#1516) 2025-07-05 00:54:44 +08:00
hysyeah
9efa6df969 tapr: add default perm for nats subject (#1515)
fix: add default perm for nats subject
2025-07-05 00:54:01 +08:00
dkeven
045dfc11bc perf(ci): ignore more archs when releasing cli (#1514)
* perf(ci): ignore more archs when releasing cli

* Update auth_backend_deploy.yaml

---------

Co-authored-by: eball <liuy102@hotmail.com>
2025-07-04 18:45:36 +08:00
hysyeah
9913d29f81 studio-server: move studio server to os-framework (#1513) 2025-07-04 00:42:39 +08:00
berg
0ccf091aff market, settings: fix the problem of theme settings & settings apps status & market terminusInfo error (#1512)
feat: update market frontend and backend version
2025-07-04 00:41:54 +08:00
dkeven
01f3b27b8c feat(upgrade): update sysconf for specific versions (#1511) 2025-07-04 00:41:12 +08:00
dkeven
475faafec4 fix(cli): clear upgrade-related state files when uninstalling (#1510) 2025-07-03 21:01:07 +08:00
berg
31ab286a4b market, profile: fix display error in avatar selector's image list and clear market data when terminusId changed (#1509)
feat: update market frontend and backend version
2025-07-03 00:51:40 +08:00
eball
c9b4a40a1c olares: refactor installation manifest (#1508)
* olares: refactor installation manifest

* fix: file name typo

* fix: add http accept header

* fix: bug

* fix: bug

* fix: import json
2025-07-03 00:50:09 +08:00
simon
da19d00d08 fix(download): fix download task operation & reduce youtube API requests (#1507)
download
2025-07-02 21:49:49 +08:00
dkeven
49d233a55b fix(cli): also update local reserved ports when modifying sysconf (#1506) 2025-07-02 21:49:23 +08:00
dkeven
300aaa0753 fix(daemon): handle empty pid files when check process running (#1505) 2025-07-02 21:48:56 +08:00
berg
962b220440 market: add local chart upload socket event & update menu and add search function (#1504)
* fix: omit to gen entrance url before active

* feat: update market frontend and backend version

---------

Co-authored-by: hys <hysyeah@gmail.com>
2025-07-01 23:44:31 +08:00
salt
4da25bca36 fix: when need physical path, miss use frontend_resource_uri (#1500)
* fix: 1. fix: like 'why-olares.md', if input 'why', 'olares', search without result 2.when generate_monitor_folder_path_list for convert_from_physical_path_to_frontend_resource_uri not propagate error

* fix: search3 fix when need physical path miss use frontend_resource_ui

* fix: use wrong image

---------

Co-authored-by: ubuntu <you@example.com>
2025-07-01 23:32:34 +08:00
dkeven
42eff16695 feat(cli): config endpoint_pod_names in coredns when installing (#1503) 2025-07-01 20:35:42 +08:00
dkeven
450aa19dfc fix(cli): also reserve local ports for l4-proxied service (#1502) 2025-07-01 20:35:20 +08:00
eball
c750f6f85b infisical: create user error (#1501) 2025-07-01 20:33:18 +08:00
182 changed files with 2196 additions and 3927 deletions

View File

@@ -65,6 +65,7 @@ jobs:
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
upload-daemon:
needs: test-version
@@ -73,6 +74,7 @@ jobs:
with:
version: ${{ needs.test-version.outputs.version }}
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
push-image:
runs-on: ubuntu-latest
@@ -132,6 +134,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -156,6 +159,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.test-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
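The recurring change in these workflow hunks splices the new `REPO_PATH` secret directly between the bucket name and the object key (for example `s3://terminus-os-install${{ secrets.REPO_PATH }}${file}`), with no separator added by the workflow itself. A minimal shell sketch of that concatenation, assuming the secret carries its own leading and trailing slashes (the `/vendor-x/` value is illustrative, not from the source):

```shell
#!/bin/sh
# Sketch of how REPO_PATH is spliced into the S3 destination.
# Assumption: the secret value includes both leading and trailing
# slashes (e.g. "/vendor-x/"); an empty value would leave no "/"
# between the bucket and the object key at all.
s3_dest() {
  bucket="terminus-os-install"
  repo_path="$1"   # e.g. "/vendor-x/" or "/" for the flat layout
  file="$2"
  printf 's3://%s%s%s\n' "$bucket" "$repo_path" "$file"
}

s3_dest "/vendor-x/" "install-wizard-v1.12.0.tar.gz"
s3_dest "/" "olares-cli.tar.gz"
```

With `REPO_PATH` set to `/`, the destination degenerates to the previous flat `s3://terminus-os-install/<file>` layout, which is presumably why the change is backward compatible.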

View File

@@ -11,27 +11,13 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -42,28 +28,12 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64

View File

@@ -11,22 +11,6 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
coscmd config -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
# test
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
@@ -42,23 +26,6 @@ jobs:
- name: "Checkout source code"
uses: actions/checkout@v3
- name: Install coscmd
run: pip install coscmd
- name: Configure coscmd
env:
TENCENT_SECRET_ID: ${{ secrets.TENCENT_SECRET_ID }}
TENCENT_SECRET_KEY: ${{ secrets.TENCENT_SECRET_KEY }}
COS_BUCKET: ${{ secrets.COS_BUCKET }}
COS_REGION: ${{ secrets.COS_REGION }}
END_POINT: ${{ secrets.END_POINT }}
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
coscmd config -m 10 -p 10 -a $TENCENT_SECRET_ID \
-s $TENCENT_SECRET_KEY \
-b $COS_BUCKET \
-r $COS_REGION
- env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

View File

@@ -8,7 +8,17 @@ on:
required: true
ref:
type: string
repository:
type: string
workflow_dispatch:
inputs:
version:
type: string
required: true
ref:
type: string
repository:
type: string
jobs:
goreleaser:
runs-on: ubuntu-22.04
@@ -18,6 +28,7 @@ jobs:
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
repository: ${{ inputs.repository }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
@@ -51,6 +62,5 @@ jobs:
AWS_DEFAULT_REGION: "us-east-1"
run: |
cd cli/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
# coscmd upload $file /$file
aws s3 cp "$file" s3://terminus-os-install${{ secrets.REPO_PATH }}${file} --acl=public-read
done

View File

@@ -8,7 +8,17 @@ on:
required: true
ref:
type: string
repository:
type: string
workflow_dispatch:
inputs:
version:
type: string
required: true
ref:
type: string
repository:
type: string
jobs:
goreleaser:
@@ -19,6 +29,7 @@ jobs:
with:
fetch-depth: 1
ref: ${{ inputs.ref }}
repository: ${{ inputs.repository }}
- name: Add Local Git Tag For GoReleaser
run: git tag ${{ inputs.version }}
@@ -54,5 +65,5 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
cd daemon/output && for file in *.tar.gz; do
aws s3 cp "$file" s3://terminus-os-install/$file --acl=public-read
aws s3 cp "$file" s3://terminus-os-install${{ secrets.REPO_PATH }}${file} --acl=public-read
done

View File

@@ -77,6 +77,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
bash build/deps-manifest.sh && bash build/upload-deps.sh
@@ -94,6 +95,7 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
VERSION: ${{ needs.daily-version.outputs.version }}
REPO_PATH: '${{ secrets.REPO_PATH }}'
run: |
export PATH=$PATH:/usr/local/bin:/home/ubuntu/.local/bin
bash build/deps-manifest.sh linux/arm64 && bash build/upload-deps.sh linux/arm64
@@ -121,8 +123,8 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz > install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz s3://terminus-os-install/install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ needs.daily-version.outputs.version }}.tar.gz --acl=public-read && \
echo "md5sum=$(awk '{print $1}' install-wizard-v${{ needs.daily-version.outputs.version }}.md5sum.txt)" >> "$GITHUB_OUTPUT"

View File

@@ -80,8 +80,8 @@ jobs:
AWS_DEFAULT_REGION: 'us-east-1'
run: |
md5sum install-wizard-v${{ github.event.inputs.tags }}.tar.gz > install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt s3://terminus-os-install/install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.tar.gz s3://terminus-os-install/install-wizard-v${{ github.event.inputs.tags }}.tar.gz --acl=public-read
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt --acl=public-read && \
aws s3 cp install-wizard-v${{ github.event.inputs.tags }}.tar.gz s3://terminus-os-install${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.tar.gz --acl=public-read
release:
runs-on: ubuntu-latest
@@ -101,7 +101,7 @@ jobs:
- name: Get checksum
id: vars
run: |
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net/install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
echo "version_md5sum=$(curl -sSfL https://dc3p1870nn3cj.cloudfront.net${{ secrets.REPO_PATH }}install-wizard-v${{ github.event.inputs.tags }}.md5sum.txt|awk '{print $1}')" >> $GITHUB_OUTPUT
- name: Update checksum
uses: eball/write-tag-to-version-file@latest
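The release job above round-trips a checksum through a text file: `md5sum` writes `<hash>  <filename>` to a `.md5sum.txt` artifact, and a later step recovers just the hash with `awk '{print $1}'` (locally here, via `curl` from CloudFront in the workflow). A self-contained sketch of that round-trip, with illustrative file names:

```shell
#!/bin/sh
# Sketch of the checksum round-trip in the release workflow:
# md5sum emits "<hash>  <file>"; awk '{print $1}' keeps only the hash.
checksum_of() {
  md5sum "$1" | awk '{print $1}'
}

# Stand-in for the real install-wizard tarball.
printf 'hello olares\n' > wizard-test.tar.gz
md5sum wizard-test.tar.gz > wizard-test.md5sum.txt

# What the workflow writes into $GITHUB_OUTPUT.
extracted=$(awk '{print $1}' wizard-test.md5sum.txt)
echo "version_md5sum=$extracted"
```

Because the `.md5sum.txt` file embeds the filename, extracting field one is what keeps only the 32-character hex digest.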

View File

@@ -108,20 +108,15 @@ Olares has been tested and verified on the following Linux platforms:
To get started with Olares on your own device, follow the [Getting Started Guide](https://docs.olares.com/manual/get-started/) for step-by-step instructions.
## Project navigation
> [!NOTE]
> We are currently consolidating Olares subproject code into this repository. This process may take a few months. Once finished, you will get a comprehensive view of the entire Olares system here.
This section lists the main directories in the Olares repository:
* **`apps`**: Contains the code for system applications, primarily for `larepass`.
* **`cli`**: Contains the code for `olares-cli`, the command-line interface tool for Olares.
* **`daemon`**: Contains the code for `olaresd`, the system daemon process.
* **[`apps`](./apps)**: Contains the code for system applications, primarily for `larepass`.
* **[`cli`](./cli)**: Contains the code for `olares-cli`, the command-line interface tool for Olares.
* **[`daemon`](./daemon)**: Contains the code for `olaresd`, the system daemon process.
* **`docs`**: Contains documentation for the project.
* **`framework`**: Contains the Olares system services.
* **`infrastructure`**: Contains code related to infrastructure components such as computing, storage, networking, and GPUs.
* **`platform`**: Contains code for cloud-native components like databases and message queues.
* **[`framework`](./framework)**: Contains the Olares system services.
* **[`infrastructure`](./infrastructure)**: Contains code related to infrastructure components such as computing, storage, networking, and GPUs.
* **[`platform`](./platform)**: Contains code for cloud-native components like databases and message queues.
* **`vendor`**: Contains code from third-party hardware vendors.
## Contributing to Olares

View File

@@ -110,19 +110,15 @@ Olares 已在以下 Linux 平台完成测试与验证:
参考[快速上手指南](https://docs.olares.cn/zh/manual/get-started/)安装并激活 Olares。
## 项目目录
> [!NOTE]
> 我们正将 Olares 子项目的代码移动到当前仓库。此过程可能会持续数月。届时您就可以通过本仓库了解 Olares 系统的全貌。
Olares 代码库中的主要目录如下:
* **`apps`**: 用于存放系统应用,主要是 `larepass` 的代码。
* **`cli`**: 用于存放 `olares-cli`(Olares 的命令行界面工具)的代码。
* **`daemon`**: 用于存放 `olaresd`(系统守护进程)的代码。
* **[`apps`](./apps)**: 用于存放系统应用,主要是 `larepass` 的代码。
* **[`cli`](./cli)**: 用于存放 `olares-cli`(Olares 的命令行界面工具)的代码。
* **[`daemon`](./daemon)**: 用于存放 `olaresd`(系统守护进程)的代码。
* **`docs`**: 用于存放 Olares 项目的文档。
* **`framework`**: 用来存放 Olares 系统服务代码。
* **`infrastructure`**: 用于存放计算、存储、网络、GPU 等基础设施的代码。
* **`platform`**: 用于存放数据库、消息队列等云原生组件的代码。
* **[`framework`](./framework)**: 用来存放 Olares 系统服务代码。
* **[`infrastructure`](./infrastructure)**: 用于存放计算、存储、网络、GPU 等基础设施的代码。
* **[`platform`](./platform)**: 用于存放数据库、消息队列等云原生组件的代码。
* **`vendor`**: 用于存放来自第三方硬件供应商的代码。
## 社区贡献

View File

@@ -110,18 +110,15 @@ Olaresは以下のLinuxプラットフォームで動作検証を完了してい
## プロジェクトナビゲーション
> [!NOTE]
> 現在、Olaresのサブプロジェクトのコードを当リポジトリへ移行する作業を進めています。この作業が完了するまでには数ヶ月を要する見込みです。完了後には、当リポジトリを通じてOlaresシステムの全貌をご覧いただけるようになります。
このセクションでは、Olares リポジトリ内の主要なディレクトリをリストアップしています:
* **`apps`**: システムアプリケーションのコードが含まれており、主に `larepass` 用です。
* **`cli`**: Olares のコマンドラインインターフェースツールである `olares-cli` のコードが含まれています。
* **`daemon`**: システムデーモンプロセスである `olaresd` のコードが含まれています。
* **[`apps`](./apps)**: システムアプリケーションのコードが含まれており、主に `larepass` 用です。
* **[`cli`](./cli)**: Olares のコマンドラインインターフェースツールである `olares-cli` のコードが含まれています。
* **[`daemon`](./daemon)**: システムデーモンプロセスである `olaresd` のコードが含まれています。
* **`docs`**: プロジェクトのドキュメントが含まれています。
* **`framework`**: Olares システムサービスが含まれています。
* **`infrastructure`**: コンピューティング、ストレージ、ネットワーキング、GPU などのインフラストラクチャコンポーネントに関連するコードが含まれています。
* **`platform`**: データベースやメッセージキューなどのクラウドネイティブコンポーネントのコードが含まれています。
* **[`framework`](./framework)**: Olares システムサービスが含まれています。
* **[`infrastructure`](./infrastructure)**: コンピューティング、ストレージ、ネットワーキング、GPU などのインフラストラクチャコンポーネントに関連するコードが含まれています。
* **[`platform`](./platform)**: データベースやメッセージキューなどのクラウドネイティブコンポーネントのコードが含まれています。
* **`vendor`**: サードパーティのハードウェアベンダーからのコードが含まれています。
## Olaresへの貢献

View File

@@ -1,296 +1,13 @@
{{- $namespace := printf "%s%s" "user-system-" .Values.bfl.username -}}
{{- $studio_secret := (lookup "v1" "Secret" $namespace "studio-secrets") -}}
{{- $pg_password := "" -}}
{{ if $studio_secret -}}
{{ $pg_password = (index $studio_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: studio-secrets
namespace: user-system-{{ .Values.bfl.username }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: studio-pg
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: studio
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: studio_{{ .Values.bfl.username }}
password:
valueFrom:
secretKeyRef:
key: pg_password
name: studio-secrets
databases:
- name: studio
---
apiVersion: v1
kind: Service
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
namespace: user-space-{{ .Values.bfl.username }}
spec:
selector:
app: studio-server
type: ExternalName
externalName: studio-server.os-framework.svc.cluster.local
ports:
- protocol: TCP
name: studio-server
port: 8080
targetPort: 8088
name: http
- protocol: TCP
port: 8083
targetPort: 8083
name: https
---
kind: Service
apiVersion: v1
metadata:
name: chartmuseum-studio
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8888
selector:
app: studio-server
---
apiVersion: v1
kind: ConfigMap
metadata:
name: studio-san-cnf
namespace: {{ .Release.Namespace }}
data:
san.cnf: |
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = CN
stateOrProvinceName = Beijing
localityName = Beijing
0.organizationName = bytetrade
commonName = studio-server.{{ .Release.Namespace }}.svc
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @bytetrade
[bytetrade]
DNS.1 = studio-server.{{ .Release.Namespace }}.svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: studio-server
namespace: {{ .Release.Namespace }}
labels:
app: studio-server
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: studio-server
template:
metadata:
labels:
app: studio-server
spec:
serviceAccountName: bytetrade-controller
volumes:
- name: chart
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData}}/studio/Chart'
- name: data
hostPath:
type: DirectoryOrCreate
path: '{{ .Values.userspace.appData }}/studio/Data'
- name: storage-volume
hostPath:
path: '{{ .Values.userspace.appData }}/studio/helm-repo-dev'
type: DirectoryOrCreate
- name: config-san
configMap:
name: studio-san-cnf
items:
- key: san.cnf
path: san.cnf
- name: certs
emptyDir: {}
initContainers:
- name: init-chmod-data
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- sh
- '-c'
- |
chown -R 1000:1000 /home/coder
chown -R 65532:65532 /charts
chown -R 65532:65532 /data
securityContext:
runAsUser: 0
resources: { }
volumeMounts:
- name: storage-volume
mountPath: /home/coder
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- name: generate-certs
image: beclab/openssl:v3
imagePullPolicy: IfNotPresent
command: [ "/bin/sh", "-c" ]
args:
- |
openssl genrsa -out /etc/certs/ca.key 2048
openssl req -new -x509 -days 3650 -key /etc/certs/ca.key -out /etc/certs/ca.crt \
-subj "/CN=bytetrade CA/O=bytetrade/C=CN"
openssl req -new -newkey rsa:2048 -nodes \
-keyout /etc/certs/server.key -out /etc/certs/server.csr \
-config /etc/san/san.cnf
openssl x509 -req -days 3650 -in /etc/certs/server.csr \
-CA /etc/certs/ca.crt -CAkey /etc/certs/ca.key \
-CAcreateserial -out /etc/certs/server.crt \
-extensions v3_req -extfile /etc/san/san.cnf
chown -R 65532 /etc/certs/*
volumeMounts:
- name: config-san
mountPath: /etc/san
- name: certs
mountPath: /etc/certs
containers:
- name: studio
image: beclab/studio-server:v0.1.52
imagePullPolicy: IfNotPresent
args:
- server
ports:
- name: port
containerPort: 8088
protocol: TCP
- name: ssl-port
containerPort: 8083
protocol: TCP
volumeMounts:
- name: chart
mountPath: /charts
- name: data
mountPath: /data
- mountPath: /etc/certs
name: certs
- mountPath: /storage
name: storage-volume
lifecycle:
preStop:
exec:
command:
- "/studio"
- "clean"
env:
- name: BASE_DIR
value: /charts
- name: OS_API_KEY
value: {{ .Values.os.studio.appKey }}
- name: OS_API_SECRET
value: {{ .Values.os.studio.appSecret }}
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: NAME_SPACE
value: {{ .Release.Namespace }}
- name: OWNER
value: '{{ .Values.bfl.username }}'
- name: DB_HOST
value: citus-master-svc.user-system-{{ .Values.bfl.username }}
- name: DB_USERNAME
value: studio_{{ .Values.bfl.username }}
- name: DB_PASSWORD
value: "{{ $pg_password | b64dec }}"
- name: DB_NAME
value: user_space_{{ .Values.bfl.username }}_studio
- name: DB_PORT
value: "5432"
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: "0.5"
memory: 1000Mi
- name: chartmuseum
image: aboveos/helm-chartmuseum:v0.15.0
args:
- '--port=8888'
- '--storage-local-rootdir=/storage'
ports:
- name: http
containerPort: 8888
protocol: TCP
env:
- name: CHART_POST_FORM_FIELD_NAME
value: chart
- name: DISABLE_API
value: 'false'
- name: LOG_JSON
value: 'true'
- name: PROV_POST_FORM_FIELD_NAME
value: prov
- name: STORAGE
value: local
resources:
requests:
cpu: "50m"
memory: 100Mi
limits:
cpu: 1000m
memory: 512Mi
volumeMounts:
- name: storage-volume
mountPath: /storage
livenessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
targetPort: 8080
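The deleted studio template opens with (and the user-service template in the next file reuses) Helm's lookup-or-generate idiom for a stable random secret. In sketch form, with illustrative names, the pattern is:

```
# Sketch of the lookup-or-generate idiom used for pg_password above.
# On first install the Secret does not yet exist, so a random value
# is generated; on upgrade, lookup finds the existing Secret and the
# same password is reused instead of being rotated.
{{- $secret := (lookup "v1" "Secret" .Release.Namespace "example-secrets") -}}
{{- $password := "" -}}
{{- if $secret -}}
{{- $password = (index $secret "data" "password") -}}
{{- else -}}
{{- $password = randAlphaNum 16 | b64enc -}}
{{- end -}}
apiVersion: v1
kind: Secret
metadata:
  name: example-secrets
type: Opaque
data:
  password: {{ $password }}
```

Note that `lookup` returns the live object, so the stored value is already base64-encoded; that is why the consuming Deployment pipes it through `b64dec` when injecting it as `DB_PASSWORD`.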

View File

@@ -42,6 +42,14 @@
{{ $user_service_pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $user_service_redis_password := "" -}}
{{ if $user_service_secret -}}
{{ $user_service_redis_password = (index $user_service_secret "data" "redis_password") }}
{{ else -}}
{{ $user_service_redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $user_service_nats_secret := (lookup "v1" "Secret" $namespace "user-service-nats-secret") -}}
{{- $nats_password := "" -}}
{{ if $user_service_nats_secret -}}
@@ -114,22 +122,6 @@ spec:
---
apiVersion: v1
kind: Service
metadata:
name: wise-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: system-frontend
ports:
- name: "frontend"
protocol: TCP
port: 80
targetPort: 84
---
apiVersion: v1
kind: Service
metadata:
name: headscale-svc
namespace: user-space-{{ .Values.bfl.username }}
@@ -254,11 +246,11 @@ metadata:
applications.app.bytetrade.io/group: 'true'
applications.app.bytetrade.io/author: bytetrade.io
annotations:
applications.app.bytetrade.io/icon: '{"dashboard":"https://file.bttcdn.com/appstore/dashboard/icon.png","control-hub":"https://file.bttcdn.com/appstore/control-hub/icon.png","profile":"https://file.bttcdn.com/appstore/profile/icon.png","wise":"https://file.bttcdn.com/appstore/rss/icon.png","headscale": "https://file.bttcdn.com/appstore/headscale/icon.png","settings": "https://file.bttcdn.com/appstore/settings/icon.png","studio":"https://file.bttcdn.com/appstore/devbox/icon.png","files":"https://file.bttcdn.com/appstore/files/icon.png","vault":"https://file.bttcdn.com/appstore/vault/icon.png","market":"https://file.bttcdn.com/appstore/appstore/icon.png"}'
applications.app.bytetrade.io/title: '{"dashboard": "Dashboard","control-hub":"Control Hub","profile":"Profile","wise":"Wise","headscale":"Headscale","settings":"Settings","studio":"Studio","files":"Files","vault":"Vault","market":"Market"}'
applications.app.bytetrade.io/version: '{"dashboard": "0.0.1","control-hub":"0.0.1","profile":"0.0.1","wise":"0.0.1","headscale":"0.0.1","settings":"0.0.1","studio":"0.0.1","files":"0.0.1","vault":"0.0.1","market":"0.0.1"}'
applications.app.bytetrade.io/icon: '{"dashboard":"https://file.bttcdn.com/appstore/dashboard/icon.png","control-hub":"https://file.bttcdn.com/appstore/control-hub/icon.png","profile":"https://file.bttcdn.com/appstore/profile/icon.png","headscale": "https://file.bttcdn.com/appstore/headscale/icon.png","settings": "https://file.bttcdn.com/appstore/settings/icon.png","studio":"https://file.bttcdn.com/appstore/devbox/icon.png","files":"https://file.bttcdn.com/appstore/files/icon.png","vault":"https://file.bttcdn.com/appstore/vault/icon.png","market":"https://file.bttcdn.com/appstore/appstore/icon.png"}'
applications.app.bytetrade.io/title: '{"dashboard": "Dashboard","control-hub":"Control Hub","profile":"Profile","headscale":"Headscale","settings":"Settings","studio":"Studio","files":"Files","vault":"Vault","market":"Market"}'
applications.app.bytetrade.io/version: '{"dashboard": "0.0.1","control-hub":"0.0.1","profile":"0.0.1","headscale":"0.0.1","settings":"0.0.1","studio":"0.0.1","files":"0.0.1","vault":"0.0.1","market":"0.0.1"}'
applications.app.bytetrade.io/policies: '{"dashboard":{"policies":[{"entranceName":"dashboard","uriRegex":"/js/script.js", "level":"public"},{"entranceName":"dashboard","uriRegex":"/js/api/send", "level":"public"}]}}'
applications.app.bytetrade.io/entrances: '{"dashboard":[{"name":"dashboard","host":"dashboard-service","port":80,"title":"Dashboard","windowPushState":true}],"control-hub":[{"name":"control-hub","host":"control-hub-service","port":80,"title":"Control Hub","windowPushState":true}],"profile":[{"name":"profile", "host":"profile-service", "port":80,"title":"Profile","windowPushState":true}],"wise":[{"name":"wise", "host":"wise-svc", "port":80,"title":"Wise","windowPushState":true}],"headscale":[{"name":"headscale", "host":"headscale-svc", "port":80,"title":"Headscale","invisible": true}],"settings":[{"name":"settings", "host":"settings-service", "port":80,"title":"Settings"}],"studio":[{"name":"studio","host":"studio-svc","port":8080,"title":"Studio","openMethod":"window"}],"files":[{"name":"files", "host":"files-fe-service", "port":80,"title":"Files","windowPushState":true}],"vault":[{"name":"vault", "host":"vault-service", "port":80,"title":"Vault","windowPushState":true}],"market":[{"name":"appstore", "host":"appstore-fe-service", "port":80,"title":"Market","windowPushState":true}]}'
applications.app.bytetrade.io/entrances: '{"dashboard":[{"name":"dashboard","host":"dashboard-service","port":80,"title":"Dashboard","windowPushState":true}],"control-hub":[{"name":"control-hub","host":"control-hub-service","port":80,"title":"Control Hub","windowPushState":true}],"profile":[{"name":"profile", "host":"profile-service", "port":80,"title":"Profile","windowPushState":true}],"headscale":[{"name":"headscale", "host":"headscale-svc", "port":80,"title":"Headscale","invisible": true}],"settings":[{"name":"settings", "host":"settings-service", "port":80,"title":"Settings"}],"studio":[{"name":"studio","host":"studio-svc","port":8080,"title":"Studio","openMethod":"window"}],"files":[{"name":"files", "host":"files-fe-service", "port":80,"title":"Files","windowPushState":true}],"vault":[{"name":"vault", "host":"vault-service", "port":80,"title":"Vault","windowPushState":true}],"market":[{"name":"appstore", "host":"appstore-fe-service", "port":80,"title":"Market","windowPushState":true}]}'
spec:
replicas: 1
selector:
@@ -270,10 +262,12 @@ spec:
app: system-frontend
io.bytetrade.app: "true"
annotations:
{{ if .Values.telemetry }}
instrumentation.opentelemetry.io/inject-nodejs: "olares-instrumentation"
instrumentation.opentelemetry.io/nodejs-container-names: "user-service"
instrumentation.opentelemetry.io/inject-nginx: "olares-instrumentation"
instrumentation.opentelemetry.io/inject-nginx-container-names: "system-frontend"
{{ end }}
spec:
priorityClassName: "system-cluster-critical"
initContainers:
@@ -351,7 +345,7 @@ spec:
- name: PGDB
value: user_space_{{ .Values.bfl.username }}_cloud_drive_integration
- name: system-frontend-init
image: beclab/system-frontend:v1.3.89
image: beclab/system-frontend:v1.3.102
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -394,7 +388,6 @@ spec:
- containerPort: 81
- containerPort: 82
- containerPort: 83
- containerPort: 84
- containerPort: 85
- containerPort: 86
- containerPort: 88
@@ -474,7 +467,7 @@ spec:
- name: NATS_SUBJECT_VAULT
value: os.vault.{{ .Values.bfl.username}}
- name: user-service
image: beclab/user-service:v0.0.20
image: beclab/user-service:v0.0.21
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
@@ -540,6 +533,15 @@ spec:
value: os.knowledge.{{ .Values.bfl.username}}
- name: NATS_SUBJECT_VAULT
value: os.vault.{{ .Values.bfl.username}}
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
key: redis_password
name: user-service-secrets
- name: REDIS_HOST
value: redis-cluster-proxy.user-system-{{ .Values.bfl.username }}
- name: REDIS_PORT
value: '6379'
- name: drive-server
image: beclab/drive:v0.0.72
@@ -769,21 +771,15 @@ spec:
secretKeyRef:
key: files_frontend_nats_password
name: files-frontend-nats-secrets
refs:
- appName: files-server
appNamespace: os
subjects:
- name: files-notify
perm:
- pub
- sub
- appName: user-service
appNamespace: os
subjects:
- name: "files.*"
perm:
- pub
- sub
subjects:
- name: files-notify
permission:
pub: allow
sub: allow
- name: files.{{ .Values.bfl.username }}
permission:
sub: allow
pub: allow
user: user-system-{{ .Values.bfl.username }}-files-frontend
---
apiVersion: v1
@@ -1259,6 +1255,7 @@ metadata:
type: Opaque
data:
pg_password: {{ $user_service_pg_password }}
redis_password: {{ $user_service_redis_password }}
---
apiVersion: v1
kind: Secret
@@ -1268,6 +1265,7 @@ metadata:
type: Opaque
data:
pg_password: {{ $user_service_pg_password }}
redis_password: {{ $user_service_redis_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
@@ -1288,6 +1286,23 @@ spec:
databases:
- name: user-service
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: user-service-redis
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: user-service
appNamespace: user-space-{{ .Values.bfl.username }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis_password
name: user-service-secrets
namespace: user-service
---
apiVersion: v1
kind: Service
metadata:
@@ -1584,8 +1599,6 @@ data:
prefix: "/images/upload"
route:
cluster: images
http_protocol_options:
accept_http_10: true
http_filters:
- name: envoy.filters.http.router
typed_config:
@@ -2129,78 +2142,36 @@ spec:
secretKeyRef:
key: nats_password
name: user-service-nats-secret
refs: []
subjects:
- export:
- appName: files-server
sub: allow
pub: allow
- appName: files-frontend
sub: allow
pub: allow
name: "files.*"
- name: "files.*"
permission:
pub: allow
sub: allow
- export:
- appName: notifications
sub: allow
pub: allow
name: "notification.*"
- name: "notification.*"
permission:
pub: allow
sub: allow
- export:
- appName: search-server
sub: allow
pub: allow
name: "search.*"
- name: "search.*"
permission:
pub: allow
sub: allow
- export:
- appName: seahub-server
sub: allow
pub: allow
name: "seahub.*"
- name: "seahub.*"
permission:
sub: allow
pub: allow
- export:
- appName: vault-server
sub: allow
pub: allow
name: "vault.*"
- name: "vault.*"
permission:
sub: allow
pub: allow
- export:
- appName: market-backend
sub: allow
pub: allow
- appName: app-service
sub: allow
pub: allow
name: "application.*"
- name: "application.*"
permission:
sub: allow
pub: allow
- export:
- appName: knowledge
sub: allow
pub: allow
- appName: download
sub: allow
pub: allow
name: "knowledge.*"
- name: "knowledge.*"
permission:
sub: allow
pub: allow
- export:
- appName: market-backend
sub: allow
pub: allow
name: "market.*"
- name: "market.*"
permission:
sub: allow
pub: allow


@@ -0,0 +1,20 @@
# Olares Apps
## Overview
This directory contains the code for system applications, primarily for LarePass. The following pre-installed system applications offer tools for managing files, knowledge, passwords, and the system itself.
## System Applications Overview
| Application | Description |
| --- | --- |
| Files | A file management app that manages and synchronizes files across devices and sources, enabling seamless sharing and access. |
| Wise | A local-first and AI-native modern reader that helps to collect, read, and manage information from various platforms. Users can run self-hosted recommendation algorithms to filter and sort online content. |
| Vault | A secure password manager for storing and managing sensitive information across devices. |
| Market | A decentralized and permissionless app store for installing, uninstalling, and updating applications and recommendation algorithms. |
| Desktop | A hub for managing and interacting with installed applications. File and application searching are also supported. |
| Profile | An app to customize the user's profile page. |
| Settings | A system configuration application. |
| Dashboard | An app for monitoring system resource usage. |
| Control Hub | The console for Olares, providing precise and autonomous control over the system and its environment. |
| DevBox | A development tool for building and deploying Olares applications. |


@@ -6,7 +6,7 @@ metadata:
annotations:
iam.kubesphere.io/uninitialized: "true"
helm.sh/resource-policy: keep
bytetrade.io/owner-role: platform-admin
bytetrade.io/owner-role: owner
bytetrade.io/terminus-name: "{{.Values.user.terminus_name}}"
bytetrade.io/launcher-auth-policy: two_factor
bytetrade.io/launcher-access-level: "1"
@@ -23,4 +23,4 @@ spec:
groups:
- lldap_admin
status:
state: Active
state: Created


@@ -24,6 +24,7 @@ cp ${BASE_DIR}/.dependencies/components ${BASE_DIR}/.manifest/.
cp ${BASE_DIR}/.dependencies/components ${BASE_DIR}/.manifest/.
pushd ${BASE_DIR}.manifest
bash ${BASE_DIR}/build-manifest.sh ${BASE_DIR}/../.manifest/installation.manifest
python3 ${BASE_DIR}/build-manifest.py ${BASE_DIR}/../.manifest/installation.manifest
popd

build/build-manifest.py Normal file

@@ -0,0 +1,162 @@
#!/usr/bin/env python3
import argparse
import hashlib
import os
import requests
import sys
import json

CDN_URL = "https://dc3p1870nn3cj.cloudfront.net"


def download_checksum(name):
    """Downloads the checksum for a given name."""
    url = f"{CDN_URL}/{name}.checksum.txt"
    try:
        response = requests.get(url)
        response.raise_for_status()
        return response.text.split()[0]
    except requests.exceptions.RequestException as e:
        print(f"Error getting checksum for {name} from {url}: {e}", file=sys.stderr)
        sys.exit(1)


def get_image_manifest(name):
    """Downloads the image manifest for a given name."""
    url = f"{CDN_URL}/{name}.manifest.json"
    try:
        response = requests.get(url)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error getting manifest for {name} from {url}: {e}", file=sys.stderr)
        sys.exit(1)


def main():
    """Main function."""
    parser = argparse.ArgumentParser()
    parser.add_argument("manifest_file", help="The manifest file to write to.")
    args = parser.parse_args()
    manifest_file = args.manifest_file

    version = os.environ.get("VERSION", "")
    repo_path = os.environ.get("REPO_PATH", "/")

    manifest_amd64_data = {}
    manifest_arm64_data = {}

    # Process components
    try:
        with open("components", "r") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                # Replace version
                if version:
                    line = line.replace("#__VERSION__", version)
                # Replace repo path
                if repo_path:
                    line = line.replace("#__REPO_PATH__", repo_path)
                fields = line.split(",")
                if len(fields) < 5:
                    print(f"Format error in components file: {line}", file=sys.stderr)
                    sys.exit(1)
                filename, path, deps, _, fileid = fields[:5]
                print(f"Downloading file checksum for {filename}")
                name = hashlib.md5(filename.encode()).hexdigest()
                url_amd64 = name
                url_arm64 = f"arm64/{name}"
                checksum_amd64 = download_checksum(url_amd64)
                checksum_arm64 = download_checksum(url_arm64)
                manifest_amd64_data[filename] = {
                    "type": "component",
                    "path": path,
                    "deps": deps,
                    "url_amd64": url_amd64,
                    "checksum_amd64": checksum_amd64,
                    "fileid": fileid
                }
                manifest_arm64_data[filename] = {
                    "type": "component",
                    "path": path,
                    "deps": deps,
                    "url_arm64": url_arm64,
                    "checksum_arm64": checksum_arm64,
                    "fileid": fileid
                }
    except FileNotFoundError:
        print("Error: 'components' file not found.", file=sys.stderr)
        sys.exit(1)

    # Process images
    path = "images"
    for deps_file in ["images.mf"]:
        try:
            with open(deps_file, "r") as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    print(f"Downloading file checksum for {line}")
                    name = hashlib.md5(line.encode()).hexdigest()
                    url_amd64 = f"{name}.tar.gz"
                    url_arm64 = f"arm64/{name}.tar.gz"
                    checksum_amd64 = download_checksum(name)
                    checksum_arm64 = download_checksum(f"arm64/{name}")
                    # Get the image manifest
                    image_manifest_amd64 = get_image_manifest(name)
                    image_manifest_arm64 = get_image_manifest(f"arm64/{name}")
                    filename = f"{name}.tar.gz"
                    manifest_amd64_data[filename] = {
                        "type": "image",
                        "path": path,
                        "deps": deps_file,
                        "url_amd64": url_amd64,
                        "checksum_amd64": checksum_amd64,
                        "fileid": line,
                        "manifest": image_manifest_amd64
                    }
                    manifest_arm64_data[filename] = {
                        "type": "image",
                        "path": path,
                        "deps": deps_file,
                        "url_arm64": url_arm64,
                        "checksum_arm64": checksum_arm64,
                        "fileid": line,
                        "manifest": image_manifest_arm64
                    }
        except FileNotFoundError:
            print(f"Warning: '{deps_file}' not found, skipping.", file=sys.stderr)
            sys.exit(1)

    # Write the manifest files
    amd64_manifest_file = f"{manifest_file}.amd64"
    with open(amd64_manifest_file, "w") as mf:
        json.dump(manifest_amd64_data, mf, indent=2)

    arm64_manifest_file = f"{manifest_file}.arm64"
    with open(arm64_manifest_file, "w") as mf:
        json.dump(manifest_arm64_data, mf, indent=2)

    # TODO: compress the manifest files


if __name__ == "__main__":
    main()


@@ -46,6 +46,9 @@ while read line; do
done < components
sed -i "s/#__VERSION__/${VERSION}/g" $manifest_file
path="${REPO_PATH:-/}"
sed -i "s|#__REPO_PATH__|${path}|g" $manifest_file
path="images"
for deps in "images.mf"; do
while read line; do


@@ -16,6 +16,7 @@ rm -rf ${BASE_DIR}/../.dependencies
set -e
pushd ${BASE_DIR}/../.manifest
bash ${BASE_DIR}/build-manifest.sh ${BASE_DIR}/../.manifest/installation.manifest
python3 ${BASE_DIR}/build-manifest.py ${BASE_DIR}/../.manifest/installation.manifest
popd
pushd $DIST_PATH


@@ -77,3 +77,5 @@ find $BASE_DIR/../ -type f -name Olares.yaml | while read f; do
done
sed -i "s/#__VERSION__/${VERSION}/g" ${manifest}
path="${REPO_PATH:-/}"
sed -i "s|#__REPO_PATH__|${path}|g" ${manifest}

build/get-manifest.py Normal file

@@ -0,0 +1,200 @@
#!/usr/bin/env python3
import requests
import json
import argparse
import re
import sys
import platform


def parse_image_name(image_name):
    """
    Parses a full image name into registry, repository, and reference (tag/digest).
    Handles defaults for Docker Hub.
    """
    # Default to 'latest' tag if no tag or digest is specified
    if ":" not in image_name and "@" not in image_name:
        image_name += ":latest"

    # Split repository from reference (tag or digest)
    if "@" in image_name:
        repo_part, reference = image_name.rsplit("@", 1)
    else:
        repo_part, reference = image_name.rsplit(":", 1)

    # Determine registry and repository
    if "/" not in repo_part:
        # This is an official Docker Hub image, e.g., "ubuntu"
        registry = "registry-1.docker.io"
        repository = f"library/{repo_part}"
    else:
        parts = repo_part.split("/")
        # If the first part looks like a domain name, it's the registry
        if "." in parts[0] or ":" in parts[0]:
            registry = parts[0]
            repository = "/".join(parts[1:])
        else:
            # A scoped Docker Hub image, e.g., "bitnami/nginx"
            registry = "registry-1.docker.io"
            repository = repo_part

    return registry, repository, reference


def get_auth_token(registry, repository):
    """
    Gets an authentication token from the registry's auth service.
    """
    # First, probe the registry to get the auth challenge
    try:
        probe_url = f"https://{registry}/v2/"
        response = requests.get(probe_url, timeout=10)
    except requests.exceptions.RequestException as e:
        print(f"Error: Could not connect to registry at {probe_url}. Details: {e}", file=sys.stderr)
        sys.exit(1)

    if response.status_code != 401:
        # Either public or something is wrong, we can try without a token
        return None

    auth_header = response.headers.get("Www-Authenticate")
    if not auth_header:
        print(f"Error: Registry {registry} returned 401 but did not provide Www-Authenticate header.", file=sys.stderr)
        sys.exit(1)

    # Parse the Www-Authenticate header to find realm, service, and scope
    try:
        realm = re.search('realm="([^"]+)"', auth_header).group(1)
        service = re.search('service="([^"]+)"', auth_header).group(1)
        # Scope for the specific repository is needed
        scope = f"repository:{repository}:pull"
    except AttributeError:
        print(f"Error: Could not parse Www-Authenticate header: {auth_header}", file=sys.stderr)
        sys.exit(1)

    # Request the actual token from the auth realm
    auth_params = {
        "service": service,
        "scope": scope
    }
    try:
        auth_response = requests.get(realm, params=auth_params, timeout=10)
        auth_response.raise_for_status()
        return auth_response.json().get("token")
    except requests.exceptions.RequestException as e:
        print(f"Error: Failed to get auth token from {realm}. Details: {e}", file=sys.stderr)
        sys.exit(1)
    except json.JSONDecodeError:
        print(f"Error: Failed to decode JSON response from auth server: {auth_response.text}", file=sys.stderr)
        sys.exit(1)


def get_manifest(registry, repository, reference, token):
    """
    Fetches the image manifest from the registry.
    """
    manifest_url = f"https://{registry}/v2/{repository}/manifests/{reference}"
    headers = {
        # Request multiple manifest types, the registry will return the correct one
        "Accept": "application/vnd.oci.image.index.v1+json, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json"
    }
    if token:
        headers["Authorization"] = f"Bearer {token}"

    try:
        response = requests.get(manifest_url, headers=headers, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 401 and not token:
            print("Error: Received 401 Unauthorized. Attempting to get a token...", file=sys.stderr)
            # The initial probe might have passed, but manifest access requires auth.
            # We re-run the token acquisition logic.
            new_token = get_auth_token(registry, repository)
            if new_token:
                return get_manifest(registry, repository, reference, new_token)
        print(f"Error: Failed to fetch manifest from {manifest_url}. Status: {e.response.status_code}", file=sys.stderr)
        print(f"Response: {e.response.text}", file=sys.stderr)
        sys.exit(1)
    except requests.exceptions.RequestException as e:
        print(f"Error: A network error occurred. Details: {e}", file=sys.stderr)
        sys.exit(1)


def main():
    parser = argparse.ArgumentParser(
        description="Fetch an OCI/Docker image manifest from a container registry.",
        epilog="""Examples:
  python get_manifest.py ubuntu:22.04
  python get_manifest.py quay.io/brancz/kube-rbac-proxy:v0.18.1 -o manifest.json
  python get_manifest.py gcr.io/google-containers/pause:3.9""",
        formatter_class=argparse.RawTextHelpFormatter
    )
    parser.add_argument("image_name", help="Full name of the container image (e.g., 'ubuntu:latest' or 'quay.io/prometheus/node-exporter:v1.7.0')")
    parser.add_argument("-o", "--output-file", help="Optional. Path to write the final manifest JSON to. If not provided, prints to stdout.")
    args = parser.parse_args()

    registry, repository, reference = parse_image_name(args.image_name)

    # Suppress informational prints if writing to a file
    verbose_print = print if not args.output_file else lambda *a, **k: None
    verbose_print(f"Registry: {registry}")
    verbose_print(f"Repository: {repository}")
    verbose_print(f"Reference: {reference}", end='\n\n', flush=True)

    token = get_auth_token(registry, repository)
    if not token and not args.output_file:
        print("No authentication token needed or could be retrieved. Proceeding without token...", file=sys.stderr)

    manifest = get_manifest(registry, repository, reference, token)

    final_manifest = None
    media_type = manifest.get("mediaType", "")
    if "manifest.list" in media_type or "image.index" in media_type:
        verbose_print("Detected a multi-platform image index. Finding manifest for current architecture...")
        system_arch = platform.machine()
        arch_map = {"x86_64": "amd64", "aarch64": "arm64"}
        target_arch = arch_map.get(system_arch, system_arch)
        verbose_print(f"System architecture: {system_arch} -> Target: linux/{target_arch}")

        target_digest = None
        for m in manifest.get("manifests", []):
            plat = m.get("platform", {})
            if plat.get("os") == "linux" and plat.get("architecture") == target_arch:
                target_digest = m.get("digest")
                break

        if target_digest:
            verbose_print(f"Found manifest for linux/{target_arch} with digest: {target_digest}\n")
            final_manifest = get_manifest(registry, repository, target_digest, token)
        else:
            print(f"Error: Could not find a manifest for 'linux/{target_arch}' in the index.", file=sys.stderr)
            if not args.output_file:
                print("Available platforms:", file=sys.stderr)
                for m in manifest.get("manifests", []):
                    print(f"  - {m.get('platform', {}).get('os')}/{m.get('platform', {}).get('architecture')}", file=sys.stderr)
            sys.exit(1)
    else:
        final_manifest = manifest

    if final_manifest:
        if args.output_file:
            try:
                with open(args.output_file, 'w') as f:
                    json.dump(final_manifest, f, indent=2)
                print(f"Successfully wrote manifest to {args.output_file}")
            except IOError as e:
                print(f"Error: Could not write to file {args.output_file}. Details: {e}", file=sys.stderr)
                sys.exit(1)
        else:
            print(json.dumps(final_manifest, indent=2))


if __name__ == "__main__":
    main()


@@ -23,26 +23,28 @@ while read line; do
continue
fi
bash ${BASE_DIR}/download-deps.sh $PLATFORM $line
if [ $? -ne 0 ]; then
exit -1
fi
filename=$(echo "$line"|awk -F"," '{print $1}')
echo "if exists $filename ... "
name=$(echo -n "$filename"|md5sum|awk '{print $1}')
checksum="$name.checksum.txt"
md5sum $name > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
echo "if exists $filename ... "
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name > /dev/null
if [ $? -ne 0 ]; then
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz)
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$name)
if [ $code -eq 403 ]; then
bash ${BASE_DIR}/download-deps.sh $PLATFORM $line
if [ $? -ne 0 ]; then
exit -1
fi
md5sum $name > $checksum
backup_file=$(awk '{print $1}' $checksum)
if [ x"$backup_file" == x"" ]; then
echo "invalid checksum"
exit 1
fi
set -ex
aws s3 cp $name s3://terminus-os-install/$path$name --acl=public-read
aws s3 cp $name s3://terminus-os-install/backup/$path$backup_file --acl=public-read


@@ -10,6 +10,7 @@ cat $1|while read image; do
echo "if exists $image ... "
name=$(echo -n "$image"|md5sum|awk '{print $1}')
checksum="$name.checksum.txt"
manifest="$name.manifest.json"
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$name.tar.gz > /dev/null
if [ $? -ne 0 ]; then
@@ -68,48 +69,29 @@ cat $1|while read image; do
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image"
echo "failed to check image checksum"
exit -1
fi
fi
fi
# upload to tencent cloud cos
# curl -fsSLI https://cdn.joinolares.cn/$path$name.tar.gz > /dev/null
# if [ $? -ne 0 ]; then
# set -e
# docker pull $image
# docker save $image -o $name.tar
# gzip $name.tar
# md5sum $name.tar.gz > $checksum
# coscmd upload ./$name.tar.gz /$path$name.tar.gz
# coscmd upload ./$checksum /$path$checksum
# echo "upload $name to cos completed"
# set +e
# fi
# # re-upload checksum.txt
# curl -fsSLI https://cdn.joinolares.cn/$path$checksum > /dev/null
# if [ $? -ne 0 ]; then
# set -e
# docker pull $image
# docker save $image -o $name.tar
# gzip $name.tar
# md5sum $name.tar.gz > $checksum
# coscmd upload ./$name.tar.gz /$path$name.tar.gz
# coscmd upload ./$checksum /$path$checksum
# echo "upload $name to cos completed"
# set +e
# fi
# upload manifest.json
curl -fsSLI https://dc3p1870nn3cj.cloudfront.net/$path$manifest > /dev/null
if [ $? -ne 0 ]; then
code=$(curl -o /dev/null -fsSLI -w "%{http_code}" https://dc3p1870nn3cj.cloudfront.net/$path$manifest)
if [ $code -eq 403 ]; then
set -ex
BASE_DIR=$(dirname $(realpath -s $0))
python3 $BASE_DIR/get-manifest.py $image -o $manifest
aws s3 cp $manifest s3://terminus-os-install/$path$manifest --acl=public-read
echo "upload $name manifest completed"
set +ex
else
if [ $code -ne 200 ]; then
echo "failed to check image manifest"
exit -1
fi
fi
fi
done


@@ -17,8 +17,12 @@ builds:
ignore:
- goos: darwin
goarch: arm
- goos: darwin
goarch: amd64
- goos: windows
goarch: arm
- goos: windows
goarch: arm64
ldflags:
- -s
- -w


@@ -1 +1,92 @@
# installer
# Olares CLI
This directory contains the code for **olares-cli**, the official command-line interface for administering an **Olares** cluster. It provides a modular, pipeline-based architecture for orchestrating complex system operations. See the full [Olares CLI Documentation](https://docs.olares.com/developer/install/cli-1.12/olares-cli.html) for command reference and tutorials.
Key responsibilities include:
- **Cluster management**: Installing, upgrading, restarting, and maintaining an Olares cluster.
- **Node management**: Adding nodes to, or removing nodes from, an Olares cluster.
## Execution Model
For most commands, `olares-cli` executes work through a four-tier hierarchy:
```
Pipeline ➜ Module ➜ Task ➜ Action
```
### Example: `install-olares` Pipeline
```text
Pipeline: Install Olares
├── ...other modules
└── Module: Bootstrap OS
├── ...other tasks
├── Task: Check Prerequisites
│ └── Action: run-precheck.sh
└── Task: Configure System
└── Action: apply-sysctl
```
## Repository layout
```text
cli/
├── cmd/ # Cobra command definitions
│ ├── main.go # CLI entry point
│ └── ctl/
│ ├── root.go
│ ├── os/ # OS-level maintenance commands
│ ├── node/ # Cluster node operations
│ └── gpu/ # GPU management
└── pkg/
├── core/
│ ├── action/ # Re-usable action primitives
│ ├── module/ # Module abstractions
│ ├── pipeline/ # Pipeline abstractions
│ └── task/ # Task abstractions
└── pipelines/ # Pre-built pipelines
    └── ...            # actual modules and tasks for various commands and components
```
## Build from source
### Prerequisites
* **Go 1.24+**
* **GoReleaser** (optional, for cross-compiling and packaging)
### Sample commands
```bash
# Clone the repo and enter the CLI folder
cd cli
# 1) Build for the host OS/ARCH
go build -o olares-cli ./cmd/main.go
# 2) Cross-compile for Linux amd64 (from macOS, for example)
GOOS=linux GOARCH=amd64 go build -o olares-cli ./cmd/main.go
# 3) Produce multi-platform artifacts (tar.gz, checksums, etc.)
goreleaser release --snapshot --clean
```
---
## Development workflow
### Add a new command
1. Create the command file in `cmd/ctl/<category>/`.
2. Define a pipeline in `pkg/pipelines/`.
3. Implement modules & tasks inside the relevant `pkg/` sub-packages.
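The wiring behind these steps can be sketched as a command-to-pipeline dispatch table. This is a minimal sketch under stated assumptions: the real CLI registers commands via Cobra under `cmd/ctl/`, and the map, `dispatch` function, and `precheck` command here are hypothetical.

```go
package main

import (
	"fmt"
	"os"
)

// Hypothetical dispatch table: each command name maps to the pipeline
// function that implements it (the real code calls into pkg/pipelines).
var pipelines = map[string]func() error{
	"precheck": func() error {
		fmt.Println("running preflight checks")
		return nil
	},
}

// dispatch looks up and runs the pipeline for a command name.
func dispatch(name string) error {
	run, ok := pipelines[name]
	if !ok {
		return fmt.Errorf("unknown command %q", name)
	}
	return run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: olares-cli <command>")
		os.Exit(1)
	}
	if err := dispatch(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Adding a command then amounts to one new map entry (step 1) backed by a pipeline implementation (steps 2 and 3).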
### Test your build
1. Upload the self-built `olares-cli` binary to a machine that's running Olares.
2. Replace the existing `olares-cli` binary on the machine using `sudo cp -f olares-cli /usr/local/bin`.
3. Execute arbitrary commands using `olares-cli` to verify your build.


@@ -60,7 +60,7 @@ echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-arptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_reserved_ports = 30000-32767' >> /etc/sysctl.conf
echo 'net.ipv4.ip_local_reserved_ports = 30000-32767,46800-50000' >> /etc/sysctl.conf
echo 'vm.max_map_count = 262144' >> /etc/sysctl.conf
echo 'fs.inotify.max_user_instances = 524288' >> /etc/sysctl.conf
echo 'kernel.pid_max = 65535' >> /etc/sysctl.conf
@@ -84,7 +84,7 @@ sed -r -i "s@#{0,}?net.ipv4.ip_forward ?= ?(0|1)@net.ipv4.ip_forward = 1@g" /et
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-arptables ?= ?(0|1)@net.bridge.bridge-nf-call-arptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-ip6tables ?= ?(0|1)@net.bridge.bridge-nf-call-ip6tables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.bridge.bridge-nf-call-iptables ?= ?(0|1)@net.bridge.bridge-nf-call-iptables = 1@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_local_reserved_ports ?= ?([0-9]{1,}-{0,1},{0,1}){1,}@net.ipv4.ip_local_reserved_ports = 30000-32767@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?net.ipv4.ip_local_reserved_ports ?= ?([0-9]{1,}-{0,1},{0,1}){1,}@net.ipv4.ip_local_reserved_ports = 30000-32767,46800-50000@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?vm.max_map_count ?= ?([0-9]{1,})@vm.max_map_count = 262144@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?fs.inotify.max_user_instances ?= ?([0-9]{1,})@fs.inotify.max_user_instances = 524288@g" /etc/sysctl.conf
sed -r -i "s@#{0,}?kernel.pid_max ?= ?([0-9]{1,})@kernel.pid_max = 65535@g" /etc/sysctl.conf


@@ -1,17 +1,16 @@
package common
const (
NamespaceDefault = "default"
NamespaceKubeNodeLease = "kube-node-lease"
NamespaceKubePublic = "kube-public"
NamespaceKubeSystem = "kube-system"
NamespaceKubekeySystem = "kubekey-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceKubesphereMonitoringFederated = "kubesphere-monitoring-federated"
NamespaceKubesphereMonitoringSystem = "kubesphere-monitoring-system"
NamespaceKubesphereSystem = "kubesphere-system"
NamespaceOsFramework = "os-framework"
NamespaceOsPlatform = "os-platform"
NamespaceDefault = "default"
NamespaceKubeNodeLease = "kube-node-lease"
NamespaceKubePublic = "kube-public"
NamespaceKubeSystem = "kube-system"
NamespaceKubekeySystem = "kubekey-system"
NamespaceKubesphereControlsSystem = "kubesphere-controls-system"
NamespaceKubesphereMonitoringSystem = "kubesphere-monitoring-system"
NamespaceKubesphereSystem = "kubesphere-system"
NamespaceOsFramework = "os-framework"
NamespaceOsPlatform = "os-platform"
ChartNameRedis = "redis"
ChartNameSnapshotController = "snapshot-controller"


@@ -133,8 +133,11 @@ type DisableTerminusdService struct {
}
func (s *DisableTerminusdService) Execute(runtime connector.Runtime) error {
if _, err := runtime.GetRunner().SudoCmd("systemctl disable --now olaresd", false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "disable olaresd failed")
stdout, _ := runtime.GetRunner().SudoCmd("systemctl is-active olaresd", false, false)
if stdout == "active" {
if _, err := runtime.GetRunner().SudoCmd("systemctl disable --now olaresd", false, true); err != nil {
return errors.Wrap(errors.WithStack(err), "disable olaresd failed")
}
}
return nil
}
@@ -144,10 +147,18 @@ type UninstallTerminusd struct {
}
func (r *UninstallTerminusd) Execute(runtime connector.Runtime) error {
var olaresdFiles []string
svcpath := filepath.Join("/etc/systemd/system", templates.TerminusdService.Name())
svcenvpath := filepath.Join("/etc/systemd/system", templates.TerminusdEnv.Name())
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("rm -rf %s && rm -rf %s && rm -rf /usr/local/bin/olaresd", svcpath, svcenvpath), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove olaresd failed")
binPath := "/usr/local/bin/olaresd"
olaresdFiles = append(olaresdFiles, svcpath, svcenvpath, binPath)
for _, pidFile := range []string{"installing.pid", "changingip.pid"} {
olaresdFiles = append(olaresdFiles, filepath.Join(runtime.GetBaseDir(), pidFile))
}
for _, f := range olaresdFiles {
if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("rm -rf %s", f), false, false); err != nil {
return errors.Wrap(errors.WithStack(err), "remove olaresd failed")
}
}
return nil
}

File diff suppressed because one or more lines are too long


@@ -4,8 +4,6 @@
image:
# Overrides the image tag whose default is the chart appVersion.
ks_controller_manager_repo: kubesphere/ks-controller-manager
ks_controller_manager_tag: "v3.3.0"
ks_apiserver_repo: beclab/ks-apiserver
ks_apiserver_tag: "v3.3.0-ext-3"


@@ -1,121 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ks-controller-manager
tier: backend
version: {{ .Chart.AppVersion }}
name: ks-controller-manager
spec:
strategy:
rollingUpdate:
maxSurge: 0
type: RollingUpdate
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
template:
metadata:
labels:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- command:
- controller-manager
- --logtostderr=true
- --leader-elect=false
image: beclab/ks-controller-manager:0.0.21
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: ks-controller-manager
ports:
- containerPort: 8080
protocol: TCP
resources:
{{- toYaml .Values.controller.resources | nindent 12 }}
volumeMounts:
- mountPath: /etc/kubesphere/
name: kubesphere-config
- mountPath: /etc/localtime
name: host-time
readOnly: true
{{- if .Values.controller.extraVolumeMounts }}
{{- toYaml .Values.controller.extraVolumeMounts | nindent 8 }}
{{- end }}
env:
{{- if .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- end }}
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
serviceAccountName: {{ include "ks-core.serviceAccountName" . }}
terminationGracePeriodSeconds: 30
volumes:
- name: kubesphere-config
configMap:
name: kubesphere-config
defaultMode: 420
- hostPath:
path: /etc/localtime
type: ""
name: host-time
{{- if .Values.controller.extraVolumes }}
{{ toYaml .Values.controller.extraVolumes | nindent 6 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- ks-controller-manager
namespaces:
- kubesphere-system
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: ks-controller-manager
tier: backend
version: {{ .Chart.AppVersion }}
name: ks-controller-manager
spec:
ports:
- port: 443
protocol: TCP
targetPort: 8443
selector:
app: ks-controller-manager
tier: backend
# version: {{ .Chart.AppVersion }}
sessionAffinity: None
type: ClusterIP


@@ -4,8 +4,6 @@
image:
# Overrides the image tag whose default is the chart appVersion.
ks_controller_manager_repo: kubesphere/ks-controller-manager
ks_controller_manager_tag: "v3.3.0"
ks_apiserver_repo: beclab/ks-apiserver
ks_apiserver_tag: "v3.3.0-ext-3"


@@ -58,12 +58,12 @@ var kscorecrds = []map[string]string{
"resource": "default-http-backend",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "secrets",
"resource": "ks-controller-manager-webhook-cert",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "secrets",
// "resource": "ks-controller-manager-webhook-cert",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "serviceaccounts",
@@ -100,24 +100,24 @@ var kscorecrds = []map[string]string{
"resource": "ks-apiserver",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "services",
"resource": "ks-controller-manager",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "services",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
{
"ns": "kubesphere-system",
"kind": "deployments",
"resource": "ks-apiserver",
"release": "ks-core",
},
{
"ns": "kubesphere-system",
"kind": "deployments",
"resource": "ks-controller-manager",
"release": "ks-core",
},
//{
// "ns": "kubesphere-system",
// "kind": "deployments",
// "resource": "ks-controller-manager",
// "release": "ks-core",
//},
//{
// "ns": "kubesphere-system",
// "kind": "validatingwebhookconfigurations",


@@ -65,7 +65,7 @@ func (t *InitNamespace) Execute(runtime connector.Runtime) error {
kubectlpath = path.Join(common.BinDir, common.CommandKubectl)
}
for _, ns := range []string{common.NamespaceKubesphereControlsSystem, common.NamespaceKubesphereMonitoringFederated} {
for _, ns := range []string{common.NamespaceKubesphereControlsSystem} {
if stdout, err := runtime.GetRunner().Cmd(fmt.Sprintf("%s create ns %s", kubectlpath, ns), false, true); err != nil {
if !strings.Contains(stdout, "already exists") {
logger.Errorf("create ns %s failed: %v", ns, err)
@@ -98,8 +98,6 @@ func (t *InitNamespace) Execute(runtime connector.Runtime) error {
common.NamespaceKubeSystem,
common.NamespaceKubekeySystem,
common.NamespaceKubesphereControlsSystem,
common.NamespaceKubesphereMonitoringFederated,
common.NamespaceKubesphereMonitoringSystem,
common.NamespaceKubesphereSystem,
}


@@ -355,7 +355,7 @@ func (c *Check) Execute(runtime connector.Runtime) error {
return fmt.Errorf("kubectl not found")
}
var labels = []string{"app=ks-apiserver", "app=ks-controller-manager"}
var labels = []string{"app=ks-apiserver"}
for _, label := range labels {
var cmd = fmt.Sprintf("%s get pod -n %s -l '%s' -o jsonpath='{.items[0].status.phase}'", kubectlpath, common.NamespaceKubesphereSystem, label)


@@ -105,6 +105,7 @@ func (p *phaseBuilder) phaseInstall() *phaseBuilder {
&certs.UninstallCertsFilesModule{},
&storage.DeleteUserDataModule{},
&terminus.DeleteWizardFilesModule{},
&terminus.DeleteUpgradeFilesModule{},
&storage.RemoveJuiceFSModule{},
&storage.DeletePhaseFlagModule{
PhaseFile: common.TerminusStateFileInstalled,
@@ -132,24 +133,13 @@ func (p *phaseBuilder) phasePrepare() *phaseBuilder {
PhaseFile: common.TerminusStateFilePrepared,
BaseDir: p.runtime.GetBaseDir(),
},
&daemon.UninstallTerminusdModule{},
&terminus.RemoveReleaseFileModule{},
)
}
return p
}
func (p *phaseBuilder) phaseDownload() *phaseBuilder {
terminusdAction := &daemon.CheckTerminusdService{}
err := terminusdAction.Execute()
if p.convert() >= PhaseDownload {
if err == nil {
p.modules = append(p.modules, &daemon.UninstallTerminusdModule{})
}
}
return p
}
func (p *phaseBuilder) phaseMacos() {
p.modules = []module.Module{
&precheck.GreetingsModule{},
@@ -177,8 +167,7 @@ func UninstallTerminus(phase string, runtime *common.KubeRuntime) pipeline.Pipel
builder.
phaseInstall().
phaseStorage().
phasePrepare().
phaseDownload()
phasePrepare()
}
return pipeline.Pipeline{


@@ -65,6 +65,7 @@ data:
health
ready
kubernetes {{ .DNSDomain }} in-addr.arpa ip6.arpa {
endpoint_pod_names
pods insecure
fallthrough in-addr.arpa ip6.arpa
}


@@ -11,6 +11,7 @@ import (
type Builder struct {
olaresRepoRoot string
vendorRepoPath string
distPath string
version string
manifestManager *manifest.Manager
@@ -19,8 +20,13 @@ type Builder struct {
func NewBuilder(olaresRepoRoot, version, cdnURL string, ignoreMissingImages bool) *Builder {
distPath := filepath.Join(olaresRepoRoot, ".dist/install-wizard")
vendorRepoPath := os.Getenv("OLARES_VENDOR_REPO_PATH")
if vendorRepoPath == "" {
vendorRepoPath = "/"
}
return &Builder{
olaresRepoRoot: olaresRepoRoot,
vendorRepoPath: vendorRepoPath,
distPath: distPath,
version: version,
manifestManager: manifest.NewManager(olaresRepoRoot, distPath, cdnURL, ignoreMissingImages),
@@ -68,6 +74,9 @@ func (b *Builder) archive() (string, error) {
if err := util.ReplaceInFile(file, "#__VERSION__", b.version); err != nil {
return "", err
}
if err := util.ReplaceInFile(file, "#__REPO_PATH__", b.vendorRepoPath); err != nil {
return "", err
}
}
tarFile := filepath.Join(b.olaresRepoRoot, fmt.Sprintf("install-wizard-%s.tar.gz", versionStr))


@@ -199,6 +199,23 @@ func (m *InstalledModule) Init() {
}
}
type DeleteUpgradeFilesModule struct {
common.KubeModule
}
func (d *DeleteUpgradeFilesModule) Init() {
d.Name = "DeleteUpgradeFiles"
deleteUpgradeFiles := &task.LocalTask{
Name: "DeleteUpgradeFiles",
Action: &DeleteUpgradeFiles{},
}
d.Tasks = []task.Interface{
deleteUpgradeFiles,
}
}
type DeleteWizardFilesModule struct {
common.KubeModule
}


@@ -296,6 +296,30 @@ func (t *InstallFinished) Execute(runtime connector.Runtime) error {
return nil
}
type DeleteUpgradeFiles struct {
common.KubeAction
}
func (d *DeleteUpgradeFiles) Execute(runtime connector.Runtime) error {
baseDir := runtime.GetBaseDir()
files, err := os.ReadDir(baseDir)
if err != nil {
return errors.Wrapf(err, "failed to read directory %s", baseDir)
}
for _, file := range files {
if strings.HasPrefix(file.Name(), "upgrade.") {
filePath := path.Join(baseDir, file.Name())
if err := os.RemoveAll(filePath); err != nil && !os.IsNotExist(err) {
logger.Warnf("failed to delete %s: %v", filePath, err)
}
}
}
return nil
}
type DeleteWizardFiles struct {
common.KubeAction
}


@@ -5,6 +5,7 @@ import (
"fmt"
"os"
"path"
"strings"
"time"
"github.com/beclab/Olares/cli/pkg/common"
@@ -221,3 +222,67 @@ func (u *UpgradeSystemComponents) Execute(runtime connector.Runtime) error {
}
return nil
}
type UpdateSysctlReservedPorts struct {
common.KubeAction
}
func (u *UpdateSysctlReservedPorts) Execute(runtime connector.Runtime) error {
const sysctlFile = "/etc/sysctl.conf"
const reservedPortsKey = "net.ipv4.ip_local_reserved_ports"
const expectedValue = "30000-32767,46800-50000"
content, err := os.ReadFile(sysctlFile)
if err != nil {
return fmt.Errorf("failed to read sysctl.conf: %v", err)
}
lines := strings.Split(string(content), "\n")
var foundKey bool
var needUpdate bool
var updatedLines []string
for _, line := range lines {
trimmedLine := strings.TrimSpace(line)
if strings.HasPrefix(trimmedLine, reservedPortsKey) {
foundKey = true
parts := strings.SplitN(trimmedLine, "=", 2)
if len(parts) == 2 {
currentValue := strings.TrimSpace(parts[1])
if currentValue != expectedValue {
logger.Infof("updating %s from %s to %s", reservedPortsKey, currentValue, expectedValue)
updatedLines = append(updatedLines, fmt.Sprintf("%s=%s", reservedPortsKey, expectedValue))
needUpdate = true
} else {
updatedLines = append(updatedLines, line)
}
} else {
updatedLines = append(updatedLines, line)
}
} else {
updatedLines = append(updatedLines, line)
}
}
if !foundKey {
logger.Infof("key %s not found in sysctl.conf, adding it", reservedPortsKey)
updatedLines = append(updatedLines, fmt.Sprintf("%s=%s", reservedPortsKey, expectedValue))
needUpdate = true
}
if needUpdate {
updatedContent := strings.Join(updatedLines, "\n")
if err := os.WriteFile(sysctlFile, []byte(updatedContent), 0644); err != nil {
return fmt.Errorf("failed to write updated sysctl.conf: %v", err)
}
if _, err := runtime.GetRunner().SudoCmd("sysctl -p", false, false); err != nil {
return fmt.Errorf("failed to reload sysctl: %v", err)
}
logger.Infof("updated and reloaded sysctl configuration")
} else {
logger.Debugf("%s already has the expected value: %s", reservedPortsKey, expectedValue)
}
return nil
}


@@ -18,7 +18,16 @@ type UpgradeModule struct {
}
var (
preTasks []*upgradeTask
preTasks = []*upgradeTask{
{
Task: &task.LocalTask{
Name: "UpdateSysctlReservedPorts",
Action: new(UpdateSysctlReservedPorts),
},
Current: &explicitVersionMatcher{max: semver.New(1, 12, 0, "20250701", "")},
Target: anyVersion,
},
}
coreTasks = []*upgradeTask{
{


@@ -5,5 +5,5 @@ output:
-
id: olaresd
name: olaresd-v#__VERSION__.tar.gz
amd64: https://dc3p1870nn3cj.cloudfront.net/olaresd-v#__VERSION__-linux-amd64.tar.gz
arm64: https://dc3p1870nn3cj.cloudfront.net/olaresd-v#__VERSION__-linux-arm64.tar.gz
amd64: https://dc3p1870nn3cj.cloudfront.net#__REPO_PATH__olaresd-v#__VERSION__-linux-amd64.tar.gz
arm64: https://dc3p1870nn3cj.cloudfront.net#__REPO_PATH__olaresd-v#__VERSION__-linux-arm64.tar.gz

daemon/README.md Normal file

@@ -0,0 +1,170 @@
# Olares System Daemon (`olaresd`)
`olaresd` is the foundational process that boots on every Olares node. Installed as a `systemd` service (unit file at `/etc/systemd/system/olaresd.service`), it listens on port `18088` and exposes a secure REST API for hardware abstraction, network orchestration, storage management, and turnkey cluster operations, all before Kubernetes starts.
## Key features
- **System monitoring**: Continuous health checks of cluster and node status.
- **Cluster lifecycle management**: Automated install, upgrade, IP switching, restart, and maintenance operations.
- **Hardware abstraction**: USB auto-mounting, storage provisioning, and management.
- **Network management**: mDNS service discovery, WiFi onboarding, and network interface control.
## REST API reference
The daemon exposes a REST API; every request must carry signature-based authentication:
**Base URL**: `http://<node-ip>:18088`
### System commands (`/command/`)
**Lifecycle operations**
| Method | Endpoint | Description |
|--------|-----------------------------|------------------------------|
| POST | `/command/install` | Install Olares |
| POST | `/command/uninstall` | Uninstall Olares |
| POST | `/command/upgrade` | Upgrade Olares |
| DELETE | `/command/upgrade` | Cancel upgrade |
| POST | `/command/reboot` | Reboot node |
| POST | `/command/shutdown` | Shutdown node |
**Network configuration**
| Method | Endpoint | Description |
|--------|-----------------------------|------------------------------|
| POST | `/command/connect-wifi` | Connect to WiFi |
| POST | `/command/change-host` | Change Olares IP binding |
**Storage management**
| Method | Endpoint | Description |
|--------|-----------------------------------|------------------------------------|
| POST | `/command/mount-samba` | Mount SMB shares |
| POST | `/command/v2/mount-samba` | Enhanced SMB mounting |
| POST | `/command/umount-samba` | Unmount SMB shares |
| POST | `/command/umount-samba-incluster` | Cluster-wide SMB unmount |
| POST | `/command/umount-usb` | Unmount USB device |
| POST | `/command/umount-usb-incluster` | Cluster-wide USB unmount |
**System maintenance**
| Method | Endpoint | Description |
|--------|-----------------------------|-------------------------------------|
| POST | `/command/collect-logs` | Collect system logs for diagnostics |
---
### System information (`/system/`)
**System status**
| Method | Endpoint | Description |
|--------|--------------------------|-----------------------------|
| GET | `/system/status` | Get full system status |
| GET | `/system/ifs` | List network interfaces |
| GET | `/system/hosts-file` | View `/etc/hosts` |
| POST | `/system/hosts-file` | Update `/etc/hosts` |
**Mount information**
| Method | Endpoint | Description |
|--------|---------------------------------|--------------------------------|
| GET | `/system/mounted-usb` | Mounted USB devices |
| GET | `/system/mounted-hdd` | Mounted hard drives |
| GET | `/system/mounted-smb` | Mounted SMB shares |
| GET | `/system/mounted-path` | All mount points |
**Cluster-wide mounts**
| Method | Endpoint | Description |
|--------|--------------------------------------|----------------------------------|
| GET | `/system/mounted-usb-incluster` | USB mounts in cluster |
| GET | `/system/mounted-hdd-incluster` | HDD mounts in cluster |
| GET | `/system/mounted-smb-incluster` | SMB mounts in cluster |
| GET | `/system/mounted-path-incluster` | All cluster mounts |
---
### Container management (`/containerd/`)
**Registry management**
| Method | Endpoint | Description |
|--------|-------------------------------------------|-------------------------------------|
| GET | `/containerd/registries` | List registries |
| GET | `/containerd/registry/mirrors/` | List registry mirrors |
| GET | `/containerd/registry/mirrors/:registry` | Get specific registry mirror |
| PUT | `/containerd/registry/mirrors/:registry` | Update registry mirror |
| DELETE | `/containerd/registry/mirrors/:registry` | Delete registry mirror |
**Image management**
| Method | Endpoint | Description |
|--------|----------------------------------|--------------------------------|
| GET | `/containerd/images/` | List container images |
| DELETE | `/containerd/images/:image` | Delete specific image |
| POST | `/containerd/images/prune` | Remove unused images |
## Build from source
### Prerequisites
* Go 1.24+
* GoReleaser (optional, for creating release artifacts)
### Steps
1. **Navigate to the daemon directory:**
```bash
cd daemon
```
2. **Build for your host OS/architecture:**
```bash
go build -o olaresd ./cmd/olaresd/main.go
```
3. **Cross-compile for another target (e.g., Linux AMD64):**
```bash
GOOS=linux GOARCH=amd64 go build -o olaresd ./cmd/olaresd/main.go
```
4. **Produce release artifacts (optional):**
```bash
goreleaser release --snapshot --clean
```
## Extend `olaresd`
To add a new command API:
1. **Define command**: Add a new command struct in `pkg/commands/`.
2. **Implement handler**: Create the corresponding HTTP handler logic in `internal/apiserver/handlers/`.
3. **Register route**: Register the new API route in `internal/apiserver/server.go`.
4. **Update state**: If the command modifies the cluster's state, ensure you update the logic in `pkg/cluster/state/`.
5. **Validate**: Run `go vet ./... && go test ./...` to check for issues and ensure all tests pass before opening a pull request.
### Test a custom build
1. Copy the binary to your Olares node.
2. On the node, replace the existing binary:
```bash
# Move the new binary into place
sudo cp -f /tmp/olaresd /usr/local/bin/
```
3. Restart the daemon to apply changes:
```bash
sudo systemctl restart olaresd
```


@@ -49,10 +49,7 @@ func main() {
mainCtx, cancel := context.WithCancel(context.Background())
apis, err := apiserver.NewServer(mainCtx, port)
if err != nil {
panic(err)
}
apis := apiserver.NewServer(mainCtx, port)
if err := state.CheckCurrentStatus(mainCtx); err != nil {
klog.Error(err)


@@ -6,6 +6,7 @@ toolchain go1.24.4
replace (
bytetrade.io/web3os/app-service => github.com/beclab/app-service v0.2.33
bytetrade.io/web3os/backups-sdk => github.com/Above-Os/backups-sdk v0.1.17
bytetrade.io/web3os/bfl => github.com/beclab/bfl v0.3.36
k8s.io/api => k8s.io/api v0.31.0
k8s.io/apimachinery => k8s.io/apimachinery v0.31.0


@@ -0,0 +1,83 @@
package handlers
import (
"github.com/beclab/Olares/daemon/internel/apiserver/server"
changehost "github.com/beclab/Olares/daemon/pkg/commands/change_host"
collectlogs "github.com/beclab/Olares/daemon/pkg/commands/collect_logs"
connectwifi "github.com/beclab/Olares/daemon/pkg/commands/connect_wifi"
"github.com/beclab/Olares/daemon/pkg/commands/install"
mountsmb "github.com/beclab/Olares/daemon/pkg/commands/mount_smb"
"github.com/beclab/Olares/daemon/pkg/commands/reboot"
"github.com/beclab/Olares/daemon/pkg/commands/shutdown"
umountsmb "github.com/beclab/Olares/daemon/pkg/commands/umount_smb"
umountusb "github.com/beclab/Olares/daemon/pkg/commands/umount_usb"
"github.com/beclab/Olares/daemon/pkg/commands/uninstall"
"github.com/beclab/Olares/daemon/pkg/commands/upgrade"
"k8s.io/klog/v2"
)
func init() {
s := server.API
cmd := s.App.Group("command")
cmd.Post("/install", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostTerminusInit, install.New))))
cmd.Post("/uninstall", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostTerminusUninstall, uninstall.New))))
cmd.Post("/upgrade", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.RequestOlaresUpgrade, upgrade.NewCreateUpgradeTarget))))
cmd.Delete("/upgrade", handlers.RequireSignature(
handlers.RunCommand(handlers.CancelOlaresUpgrade, upgrade.NewRemoveUpgradeTarget)))
cmd.Post("/reboot", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostReboot, reboot.New))))
cmd.Post("/shutdown", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostShutdown, shutdown.New))))
cmd.Post("/connect-wifi", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostConnectWifi, connectwifi.New))))
cmd.Post("/change-host", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostChangeHost, changehost.New))))
cmd.Post("/umount-usb", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostUmountUsb, umountusb.New))))
cmd.Post("/umount-usb-incluster", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostUmountUsbInCluster, umountusb.New))))
cmd.Post("/collect-logs", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostCollectLogs, collectlogs.New))))
cmd.Post("/mount-samba", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostMountSambaDriver, mountsmb.New))))
cmd.Post("/umount-samba", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostUmountSmb, umountsmb.New))))
cmd.Post("/umount-samba-incluster", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostUmountSmbInCluster, umountsmb.New))))
cmdv2 := cmd.Group("v2")
cmdv2.Post("/mount-samba", handlers.RequireSignature(
handlers.WaitServerRunning(
handlers.RunCommand(handlers.PostMountSambaDriverV2, mountsmb.New))))
klog.Info("command handlers initialized")
}


@@ -0,0 +1,28 @@
package handlers
import (
"github.com/beclab/Olares/daemon/internel/apiserver/server"
"k8s.io/klog/v2"
)
func init() {
s := server.API
containerd := s.App.Group("containerd")
containerd.Get("/registries", handlers.RequireSignature(handlers.ListRegistries))
registry := containerd.Group("registry")
mirrors := registry.Group("mirrors")
mirrors.Get("/", handlers.RequireSignature(handlers.GetRegistryMirrors))
mirrors.Get("/:registry", handlers.RequireSignature(handlers.GetRegistryMirror))
mirrors.Put("/:registry", handlers.RequireSignature(handlers.UpdateRegistryMirror))
mirrors.Delete("/:registry", handlers.RequireSignature(handlers.DeleteRegistryMirror))
image := containerd.Group("images")
image.Get("/", handlers.RequireSignature(handlers.ListImages))
image.Delete("/:image", handlers.RequireSignature(handlers.DeleteImage))
image.Post("/prune", handlers.RequireSignature(handlers.PruneImages))
klog.Info("containerd handlers initialized")
}


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -13,7 +13,7 @@ type ChangeHostReq struct {
IP string `json:"ip"`
}
func (h *handlers) PostChangeHost(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostChangeHost(ctx *fiber.Ctx, cmd commands.Interface) error {
var req ChangeHostReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -8,7 +8,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) PostCollectLogs(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostCollectLogs(ctx *fiber.Ctx, cmd commands.Interface) error {
_, err := cmd.Execute(ctx.Context(), nil)
if err != nil {
klog.Error("execute command error, ", err, ", ", cmd.OperationName().Stirng())


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -14,7 +14,7 @@ type ConnectWifiReq struct {
SSID string `json:"ssid"`
}
func (h *handlers) PostConnectWifi(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostConnectWifi(ctx *fiber.Ctx, cmd commands.Interface) error {
var req ConnectWifiReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -8,7 +8,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) ListRegistries(ctx *fiber.Ctx) error {
func (h *Handlers) ListRegistries(ctx *fiber.Ctx) error {
images, err := containerd.ListRegistries(ctx)
if err != nil {
klog.Error("list registries error, ", err)
@@ -17,7 +17,7 @@ func (h *handlers) ListRegistries(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", images)
}
func (h *handlers) GetRegistryMirrors(ctx *fiber.Ctx) error {
func (h *Handlers) GetRegistryMirrors(ctx *fiber.Ctx) error {
mirrors, err := containerd.GetRegistryMirrors(ctx)
if err != nil {
klog.Error("get registry mirrors error, ", err)
@@ -27,7 +27,7 @@ func (h *handlers) GetRegistryMirrors(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", mirrors)
}
func (h *handlers) GetRegistryMirror(ctx *fiber.Ctx) error {
func (h *Handlers) GetRegistryMirror(ctx *fiber.Ctx) error {
mirror, err := containerd.GetRegistryMirror(ctx)
if err != nil {
klog.Error("get registry mirror error, ", err)
@@ -37,7 +37,7 @@ func (h *handlers) GetRegistryMirror(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", mirror)
}
func (h *handlers) UpdateRegistryMirror(ctx *fiber.Ctx) error {
func (h *Handlers) UpdateRegistryMirror(ctx *fiber.Ctx) error {
mirror, err := containerd.UpdateRegistryMirror(ctx)
if err != nil {
klog.Error("update registry mirror error, ", err)
@@ -47,7 +47,7 @@ func (h *handlers) UpdateRegistryMirror(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", mirror)
}
func (h *handlers) DeleteRegistryMirror(ctx *fiber.Ctx) error {
func (h *Handlers) DeleteRegistryMirror(ctx *fiber.Ctx) error {
if err := containerd.DeleteRegistryMirror(ctx); err != nil {
klog.Error("delete registry mirror error, ", err)
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -56,7 +56,7 @@ func (h *handlers) DeleteRegistryMirror(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success")
}
func (h *handlers) ListImages(ctx *fiber.Ctx) error {
func (h *Handlers) ListImages(ctx *fiber.Ctx) error {
registry := ctx.Query("registry")
images, err := containerd.ListImages(ctx, registry)
if err != nil {
@@ -66,7 +66,7 @@ func (h *handlers) ListImages(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", images)
}
func (h *handlers) DeleteImage(ctx *fiber.Ctx) error {
func (h *Handlers) DeleteImage(ctx *fiber.Ctx) error {
if err := containerd.DeleteImage(ctx); err != nil {
klog.Error("delete image error, ", err)
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -74,7 +74,7 @@ func (h *handlers) DeleteImage(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success")
}
func (h *handlers) PruneImages(ctx *fiber.Ctx) error {
func (h *Handlers) PruneImages(ctx *fiber.Ctx) error {
res, err := containerd.PruneImages(ctx)
if err != nil {
klog.Error("prune images error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -9,7 +9,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) GetHostsfile(ctx *fiber.Ctx) error {
func (h *Handlers) GetHostsfile(ctx *fiber.Ctx) error {
items, err := nets.GetHostsFile()
if err != nil {
return h.ErrJSON(ctx, http.StatusServiceUnavailable, err.Error())
@@ -22,7 +22,7 @@ type writeHostsfileReq struct {
Items []*nets.HostsItem `json:"items"`
}
func (h *handlers) PostHostsfile(ctx *fiber.Ctx) error {
func (h *Handlers) PostHostsfile(ctx *fiber.Ctx) error {
var req writeHostsfileReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -21,6 +21,7 @@ type NetIf struct {
Strength *int `json:"strength,omitempty"`
MTU int `json:"mtu,omitempty"`
InternetConnected *bool `json:"internetConnected,omitempty"`
Hostname string `json:"hostname,omitempty"` // Hostname of the device
Ipv4Gateway *string `json:"ipv4Gateway,omitempty"`
Ipv6Gateway *string `json:"ipv6Gateway,omitempty"`
@@ -34,7 +35,7 @@ type NetIf struct {
TxRate *float64 `json:"txRate,omitempty"` // in bytes per second
}
func (h *handlers) GetNetIfs(ctx *fiber.Ctx) error {
func (h *Handlers) GetNetIfs(ctx *fiber.Ctx) error {
test := ctx.Query("testConnectivity", "false")
ifaces, err := nets.GetInternalIpv4Addr(test != "true")
@@ -65,6 +66,7 @@ func (h *handlers) GetNetIfs(ctx *fiber.Ctx) error {
IP: i.IP,
IsHostIp: i.IP == hostip,
MTU: i.Iface.MTU,
Hostname: host,
}
if wifiDevs != nil {
@@ -137,8 +139,8 @@ func (h *handlers) GetNetIfs(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "", res)
}
func (h *handlers) findAp(ssid string) *ble.AccessPoint {
for _, ap := range h.apList {
func (h *Handlers) findAp(ssid string) *ble.AccessPoint {
for _, ap := range h.ApList {
if ap.SSID == ssid {
return &ap
}


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -15,7 +15,7 @@ type MountReq struct {
Password string `json:"password"`
}
func (h *handlers) PostMountSambaDriver(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostMountSambaDriver(ctx *fiber.Ctx, cmd commands.Interface) error {
var req MountReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -17,7 +17,7 @@ type ListSmbResponse struct {
Mounted bool `json:"mounted"`
}
func (h *handlers) PostMountSambaDriverV2(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostMountSambaDriverV2(ctx *fiber.Ctx, cmd commands.Interface) error {
var req MountReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -9,7 +9,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) getMountedHdd(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
func (h *Handlers) getMountedHdd(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
paths, err := utils.MountedHddPath(ctx.Context())
if err != nil {
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -35,11 +35,11 @@ func (h *handlers) getMountedHdd(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *d
return h.OkJSON(ctx, "success", res)
}
func (h *handlers) GetMountedHdd(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedHdd(ctx *fiber.Ctx) error {
return h.getMountedHdd(ctx, nil)
}
func (h *handlers) GetMountedHddInCluster(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedHddInCluster(ctx *fiber.Ctx) error {
return h.getMountedHdd(ctx, func(us *disk.UsageStat) *disk.UsageStat {
us.Path = nodePathToClusterPath(us.Path)
return us


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -20,7 +20,7 @@ type mountedPath struct {
ReadOnly bool `json:"read_only"`
}
func (h *handlers) getMountedPath(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
func (h *Handlers) getMountedPath(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
paths, err := utils.MountedPath(ctx.Context())
if err != nil {
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -58,11 +58,11 @@ func (h *handlers) getMountedPath(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *
return h.OkJSON(ctx, "success", res)
}
func (h *handlers) GetMountedPath(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedPath(ctx *fiber.Ctx) error {
return h.getMountedPath(ctx, nil)
}
func (h *handlers) GetMountedPathInCluster(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedPathInCluster(ctx *fiber.Ctx) error {
return h.getMountedPath(ctx, func(us *disk.UsageStat) *disk.UsageStat {
us.Path = nodePathToClusterPath(us.Path)
return us


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -15,7 +15,7 @@ type mountedSmbPathResponse struct {
Device string `json:"device"`
}
func (h *handlers) getMountedSmb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
func (h *Handlers) getMountedSmb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
paths, err := utils.MountedSambaPath(ctx.Context())
if err != nil {
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -41,11 +41,11 @@ func (h *handlers) getMountedSmb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *d
return h.OkJSON(ctx, "success", res)
}
func (h *handlers) GetMountedSmb(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedSmb(ctx *fiber.Ctx) error {
return h.getMountedSmb(ctx, nil)
}
func (h *handlers) GetMountedSmbInCluster(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedSmbInCluster(ctx *fiber.Ctx) error {
return h.getMountedSmb(ctx, func(us *disk.UsageStat) *disk.UsageStat {
us.Path = nodePathToClusterPath(us.Path)
return us


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -9,7 +9,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) getMountedUsb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
func (h *Handlers) getMountedUsb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *disk.UsageStat) error {
paths, err := utils.MountedUsbPath(ctx.Context())
if err != nil {
return h.ErrJSON(ctx, http.StatusInternalServerError, err.Error())
@@ -33,11 +33,11 @@ func (h *handlers) getMountedUsb(ctx *fiber.Ctx, mutate func(*disk.UsageStat) *d
return h.OkJSON(ctx, "success", res)
}
func (h *handlers) GetMountedUsb(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedUsb(ctx *fiber.Ctx) error {
return h.getMountedUsb(ctx, nil)
}
func (h *handlers) GetMountedUsbInCluster(ctx *fiber.Ctx) error {
func (h *Handlers) GetMountedUsbInCluster(ctx *fiber.Ctx) error {
return h.getMountedUsb(ctx, func(us *disk.UsageStat) *disk.UsageStat {
us.Path = nodePathToClusterPath(us.Path)
return us


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"fmt"
@@ -35,7 +35,7 @@ func (r *UpgradeReq) Check() error {
return nil
}
func (h *handlers) RequestOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) RequestOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface) error {
var req UpgradeReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)
@@ -60,7 +60,7 @@ func (h *handlers) RequestOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface)
return h.OkJSON(ctx, "successfully created upgrade target")
}
func (h *handlers) CancelOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) CancelOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface) error {
if _, err := cmd.Execute(ctx.Context(), nil); err != nil {
return h.ErrJSON(ctx, http.StatusBadRequest, err.Error())
}


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -8,7 +8,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) PostReboot(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostReboot(ctx *fiber.Ctx, cmd commands.Interface) error {
_, err := cmd.Execute(ctx.Context(), nil)
if err != nil {
klog.Error("execute command error, ", err, ", ", cmd.OperationName().Stirng())


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -8,7 +8,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) PostShutdown(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostShutdown(ctx *fiber.Ctx, cmd commands.Interface) error {
_, err := cmd.Execute(ctx.Context(), nil)
if err != nil {
klog.Error("execute command error, ", err, ", ", cmd.OperationName().Stirng())


@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -16,7 +16,7 @@ type TerminusInitReq struct {
Domain string `json:"domain"`
}
func (h *handlers) PostTerminusInit(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostTerminusInit(ctx *fiber.Ctx, cmd commands.Interface) error {
var req TerminusInitReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)

View File

@@ -1,10 +1,10 @@
package apiserver
package handlers
import (
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/gofiber/fiber/v2"
)
func (h *handlers) GetTerminusState(ctx *fiber.Ctx) error {
func (h *Handlers) GetTerminusState(ctx *fiber.Ctx) error {
return h.OkJSON(ctx, "success", state.CurrentState)
}

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -8,7 +8,7 @@ import (
"k8s.io/klog/v2"
)
func (h *handlers) PostTerminusUninstall(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostTerminusUninstall(ctx *fiber.Ctx, cmd commands.Interface) error {
// run in background
_, err := cmd.Execute(h.mainCtx, nil)

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -13,7 +13,7 @@ type UmountSmbReq struct {
Path string ``
}
func (h *handlers) umountSmbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathInNode string) error {
func (h *Handlers) umountSmbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathInNode string) error {
_, err := cmd.Execute(ctx.Context(), &umountsmb.Param{
MountPath: pathInNode,
})
@@ -25,7 +25,7 @@ func (h *handlers) umountSmbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathI
return h.OkJSON(ctx, "successfully unmounted")
}
func (h *handlers) PostUmountSmb(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostUmountSmb(ctx *fiber.Ctx, cmd commands.Interface) error {
var req UmountSmbReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)
@@ -38,7 +38,7 @@ func (h *handlers) PostUmountSmb(ctx *fiber.Ctx, cmd commands.Interface) error {
return h.umountSmbInNode(ctx, cmd, req.Path)
}
func (h *handlers) PostUmountSmbInCluster(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostUmountSmbInCluster(ctx *fiber.Ctx, cmd commands.Interface) error {
var req UmountSmbReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -13,7 +13,7 @@ type UmountReq struct {
Path string ``
}
func (h *handlers) umountUsbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathInNode string) error {
func (h *Handlers) umountUsbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathInNode string) error {
_, err := cmd.Execute(ctx.Context(), &umountusb.Param{
Path: pathInNode,
})
@@ -25,7 +25,7 @@ func (h *handlers) umountUsbInNode(ctx *fiber.Ctx, cmd commands.Interface, pathI
return h.OkJSON(ctx, "successfully unmounted")
}
func (h *handlers) PostUmountUsb(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostUmountUsb(ctx *fiber.Ctx, cmd commands.Interface) error {
var req UmountReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)
@@ -38,7 +38,7 @@ func (h *handlers) PostUmountUsb(ctx *fiber.Ctx, cmd commands.Interface) error {
return h.umountUsbInNode(ctx, cmd, req.Path)
}
func (h *handlers) PostUmountUsbInCluster(ctx *fiber.Ctx, cmd commands.Interface) error {
func (h *Handlers) PostUmountUsbInCluster(ctx *fiber.Ctx, cmd commands.Interface) error {
var req UmountReq
if err := h.ParseBody(ctx, &req); err != nil {
klog.Error("parse request error, ", err)

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"context"
@@ -10,12 +10,19 @@ import (
"github.com/gofiber/fiber/v2"
)
type handlers struct {
type Handlers struct {
mainCtx context.Context
apList []ble.AccessPoint
ApList []ble.AccessPoint
}
func (h *handlers) ParseBody(ctx *fiber.Ctx, value any) error {
var handlers *Handlers = &Handlers{}
func NewHandlers(ctx context.Context) *Handlers {
handlers.mainCtx = ctx
return handlers
}
func (h *Handlers) ParseBody(ctx *fiber.Ctx, value any) error {
err := ctx.BodyParser(value)
if err != nil {
@@ -35,7 +42,7 @@ func (h *handlers) ParseBody(ctx *fiber.Ctx, value any) error {
return nil
}
func (h *handlers) ErrJSON(ctx *fiber.Ctx, code int, message string, data ...interface{}) error {
func (h *Handlers) ErrJSON(ctx *fiber.Ctx, code int, message string, data ...interface{}) error {
switch len(data) {
case 0:
return ctx.Status(code).JSON(fiber.Map{
@@ -58,10 +65,10 @@ func (h *handlers) ErrJSON(ctx *fiber.Ctx, code int, message string, data ...int
}
func (h *handlers) OkJSON(ctx *fiber.Ctx, message string, data ...interface{}) error {
func (h *Handlers) OkJSON(ctx *fiber.Ctx, message string, data ...interface{}) error {
return h.ErrJSON(ctx, http.StatusOK, message, data...)
}
func (h *handlers) NeedChoiceJSON(ctx *fiber.Ctx, message string, data ...interface{}) error {
func (h *Handlers) NeedChoiceJSON(ctx *fiber.Ctx, message string, data ...interface{}) error {
return h.ErrJSON(ctx, http.StatusMultipleChoices, message, data...)
}

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"path/filepath"

View File

@@ -1,4 +1,4 @@
package apiserver
package handlers
import (
"net/http"
@@ -13,7 +13,7 @@ const (
SIGNATURE_HEADER = "X-Signature"
)
func (h *handlers) WaitServerRunning(next func(ctx *fiber.Ctx) error) func(ctx *fiber.Ctx) error {
func (h *Handlers) WaitServerRunning(next func(ctx *fiber.Ctx) error) func(ctx *fiber.Ctx) error {
return func(ctx *fiber.Ctx) error {
if state.CurrentState.TerminusdState != state.Running {
return h.ErrJSON(ctx, http.StatusForbidden, "server is not running, please wait and retry again later")
@@ -23,7 +23,7 @@ func (h *handlers) WaitServerRunning(next func(ctx *fiber.Ctx) error) func(ctx *
}
}
func (h *handlers) RequireSignature(next func(ctx *fiber.Ctx) error) func(ctx *fiber.Ctx) error {
func (h *Handlers) RequireSignature(next func(ctx *fiber.Ctx) error) func(ctx *fiber.Ctx) error {
return func(ctx *fiber.Ctx) error {
headers := ctx.GetReqHeaders()
signature, ok := headers[SIGNATURE_HEADER]
@@ -42,7 +42,7 @@ func (h *handlers) RequireSignature(next func(ctx *fiber.Ctx) error) func(ctx *f
}
}
func (h *handlers) RunCommand(next func(ctx *fiber.Ctx, cmd commands.Interface) error,
func (h *Handlers) RunCommand(next func(ctx *fiber.Ctx, cmd commands.Interface) error,
cmdNew func() commands.Interface) func(ctx *fiber.Ctx) error {
return func(ctx *fiber.Ctx) error {

View File

@@ -0,0 +1,25 @@
package handlers
import (
"github.com/beclab/Olares/daemon/internel/apiserver/server"
"k8s.io/klog/v2"
)
func init() {
s := server.API
system := s.App.Group("system")
system.Get("/status", handlers.RequireSignature(handlers.GetTerminusState))
system.Get("/ifs", handlers.RequireSignature(handlers.GetNetIfs))
system.Get("/hosts-file", handlers.RequireSignature(handlers.GetHostsfile))
system.Post("/hosts-file", handlers.RequireSignature(handlers.PostHostsfile))
system.Get("/mounted-usb", handlers.RequireSignature(handlers.GetMountedUsb))
system.Get("/mounted-hdd", handlers.RequireSignature(handlers.GetMountedHdd))
system.Get("/mounted-smb", handlers.RequireSignature(handlers.GetMountedSmb))
system.Get("/mounted-path", handlers.RequireSignature(handlers.GetMountedPath))
system.Get("/mounted-usb-incluster", handlers.RequireSignature(handlers.GetMountedUsbInCluster))
system.Get("/mounted-hdd-incluster", handlers.RequireSignature(handlers.GetMountedHddInCluster))
system.Get("/mounted-smb-incluster", handlers.RequireSignature(handlers.GetMountedSmbInCluster))
system.Get("/mounted-path-incluster", handlers.RequireSignature(handlers.GetMountedPathInCluster))
klog.Info("system handlers initialized")
}

View File

@@ -2,146 +2,26 @@ package apiserver
import (
"context"
"fmt"
"github.com/beclab/Olares/daemon/internel/apiserver/handlers"
"github.com/beclab/Olares/daemon/internel/apiserver/server"
"github.com/beclab/Olares/daemon/internel/ble"
changehost "github.com/beclab/Olares/daemon/pkg/commands/change_host"
collectlogs "github.com/beclab/Olares/daemon/pkg/commands/collect_logs"
connectwifi "github.com/beclab/Olares/daemon/pkg/commands/connect_wifi"
"github.com/beclab/Olares/daemon/pkg/commands/install"
mountsmb "github.com/beclab/Olares/daemon/pkg/commands/mount_smb"
"github.com/beclab/Olares/daemon/pkg/commands/reboot"
"github.com/beclab/Olares/daemon/pkg/commands/shutdown"
umountsmb "github.com/beclab/Olares/daemon/pkg/commands/umount_smb"
umountusb "github.com/beclab/Olares/daemon/pkg/commands/umount_usb"
"github.com/beclab/Olares/daemon/pkg/commands/uninstall"
"github.com/beclab/Olares/daemon/pkg/commands/upgrade"
"github.com/gofiber/fiber/v2"
"github.com/gofiber/fiber/v2/middleware/cors"
"github.com/gofiber/fiber/v2/middleware/logger"
"k8s.io/klog/v2"
)
type server struct {
handlers *handlers
port int
app *fiber.App
}
func NewServer(ctx context.Context, port int) *server.Server {
server.API.Port = port
h := handlers.NewHandlers(ctx)
func NewServer(ctx context.Context, port int) (*server, error) {
return &server{handlers: &handlers{mainCtx: ctx}, port: port}, nil
}
func (s *server) Start() error {
app := fiber.New()
s.app = app
app.Use(cors.New())
app.Use(logger.New())
cmd := app.Group("command")
cmd.Post("/install", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostTerminusInit, install.New))))
cmd.Post("/uninstall", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostTerminusUninstall, uninstall.New))))
cmd.Post("/upgrade", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.RequestOlaresUpgrade, upgrade.NewCreateUpgradeTarget))))
cmd.Delete("/upgrade", s.handlers.RequireSignature(
s.handlers.RunCommand(s.handlers.CancelOlaresUpgrade, upgrade.NewRemoveUpgradeTarget)))
cmd.Post("/reboot", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostReboot, reboot.New))))
cmd.Post("/shutdown", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostShutdown, shutdown.New))))
cmd.Post("/connect-wifi", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostConnectWifi, connectwifi.New))))
cmd.Post("/change-host", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostChangeHost, changehost.New))))
cmd.Post("/umount-usb", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostUmountUsb, umountusb.New))))
cmd.Post("/umount-usb-incluster", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostUmountUsbInCluster, umountusb.New))))
cmd.Post("/collect-logs", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostCollectLogs, collectlogs.New))))
cmd.Post("/mount-samba", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostMountSambaDriver, mountsmb.New))))
cmd.Post("/umount-samba", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostUmountSmb, umountsmb.New))))
cmd.Post("/umount-samba-incluster", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostUmountSmbInCluster, umountsmb.New))))
cmdv2 := cmd.Group("v2")
cmdv2.Post("/mount-samba", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.PostMountSambaDriverV2, mountsmb.New))))
system := app.Group("system")
system.Get("/status", s.handlers.RequireSignature(s.handlers.GetTerminusState))
system.Get("/ifs", s.handlers.RequireSignature(s.handlers.GetNetIfs))
system.Get("/hosts-file", s.handlers.RequireSignature(s.handlers.GetHostsfile))
system.Post("/hosts-file", s.handlers.RequireSignature(s.handlers.PostHostsfile))
system.Get("/mounted-usb", s.handlers.RequireSignature(s.handlers.GetMountedUsb))
system.Get("/mounted-hdd", s.handlers.RequireSignature(s.handlers.GetMountedHdd))
system.Get("/mounted-smb", s.handlers.RequireSignature(s.handlers.GetMountedSmb))
system.Get("/mounted-path", s.handlers.RequireSignature(s.handlers.GetMountedPath))
system.Get("/mounted-usb-incluster", s.handlers.RequireSignature(s.handlers.GetMountedUsbInCluster))
system.Get("/mounted-hdd-incluster", s.handlers.RequireSignature(s.handlers.GetMountedHddInCluster))
system.Get("/mounted-smb-incluster", s.handlers.RequireSignature(s.handlers.GetMountedSmbInCluster))
system.Get("/mounted-path-incluster", s.handlers.RequireSignature(s.handlers.GetMountedPathInCluster))
containerd := app.Group("containerd")
containerd.Get("/registries", s.handlers.RequireSignature(s.handlers.ListRegistries))
registry := containerd.Group("registry")
mirrors := registry.Group("mirrors")
mirrors.Get("/", s.handlers.RequireSignature(s.handlers.GetRegistryMirrors))
mirrors.Get("/:registry", s.handlers.RequireSignature(s.handlers.GetRegistryMirror))
mirrors.Put("/:registry", s.handlers.RequireSignature(s.handlers.UpdateRegistryMirror))
mirrors.Delete("/:registry", s.handlers.RequireSignature(s.handlers.DeleteRegistryMirror))
image := containerd.Group("images")
image.Get("/", s.handlers.RequireSignature(s.handlers.ListImages))
image.Delete("/:image", s.handlers.RequireSignature(s.handlers.DeleteImage))
image.Post("/prune", s.handlers.RequireSignature(s.handlers.PruneImages))
return app.Listen(fmt.Sprintf(":%d", s.port))
}
func (s *server) Shutdown() error {
klog.Info("shutdown api server")
if s.app == nil {
return nil
server.API.UpdateAps = func(aplist []ble.AccessPoint) {
h.ApList = aplist
}
return s.app.Shutdown()
}
func (s *server) UpdateAps(aplist []ble.AccessPoint) {
s.handlers.apList = aplist
s := server.API
s.App.Use(cors.New())
s.App.Use(logger.New())
return s
}

View File

@@ -0,0 +1,31 @@
package server
import (
"fmt"
"github.com/beclab/Olares/daemon/internel/ble"
"github.com/gofiber/fiber/v2"
"k8s.io/klog/v2"
)
type Server struct {
Port int
App *fiber.App
UpdateAps func(aplist []ble.AccessPoint)
}
var API *Server = &Server{
App: fiber.New(),
}
func (s *Server) Start() error {
return s.App.Listen(fmt.Sprintf(":%d", s.Port))
}
func (s *Server) Shutdown() error {
klog.Info("shutdown api server")
if s.App == nil {
return nil
}
return s.App.Shutdown()
}

View File

@@ -88,6 +88,10 @@ func isProcessRunning(pidfile string) (bool, error) {
return false, err
}
if len(strings.TrimSpace(string(pidData))) == 0 {
return false, nil
}
pid, err := strconv.Atoi(string(pidData))
if err != nil {
return false, err
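
The hunk above guards against an empty pidfile before parsing: whitespace-only content now means "not running" instead of falling through to a `strconv.Atoi` parse error. The guard can be sketched in isolation as a pure function (`pidFromData` is a hypothetical name, not from the diff):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// pidFromData mirrors the guard added in isProcessRunning: empty or
// whitespace-only pidfile content means "not running", not an error.
func pidFromData(pidData []byte) (pid int, running bool, err error) {
	s := strings.TrimSpace(string(pidData))
	if len(s) == 0 {
		return 0, false, nil // empty pidfile: treat as not running
	}
	pid, err = strconv.Atoi(s)
	if err != nil {
		return 0, false, err
	}
	return pid, true, nil
}

func main() {
	_, running, err := pidFromData([]byte("  \n"))
	fmt.Println(running, err == nil) // false true
	pid, running, _ := pidFromData([]byte("1234\n"))
	fmt.Println(pid, running) // 1234 true
}
```

Note this sketch also trims before `Atoi`, so a trailing newline in the pidfile parses cleanly; whether the original code does the same is not shown in the hunk.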

View File

@@ -310,11 +310,16 @@ func ListUsers(ctx context.Context, client dynamic.Interface, filters ...Filter)
var userList []*unstructured.Unstructured
for _, u := range users.Items {
var skip bool
for _, filter := range filters {
if !filter(&u) {
continue
skip = true
break
}
}
if skip {
continue
}
userList = append(userList, &u)
}

framework/README.md
View File

@@ -0,0 +1,33 @@
# Olares Framework
## Overview
The application framework layer provides common functionality and interfaces for system and third-party applications.
## Sub-component overview
| Component | Description |
| --- | --- |
| [app-service](app-service) | Handles application lifecycle management and resource allocation. |
| [argo-workflow](argo-workflow) | A Kubernetes-native workflow engine for orchestrating parallel jobs. |
| [authelia](authelia) | An open-source authentication and authorization server that provides multi-factor authentication and single sign-on (SSO). |
| [backup-server](backup-server) | Supports backups for directories, applications, and clusters. |
| [bfl](bfl) | The Backend For Launcher service that aggregates backend interfaces and proxies requests for all system services. |
| [docker-nginx-headers-more](docker-nginx-headers-more) | A Docker image for Nginx with the `headers-more` module. |
| [files](files) | Provides essential file management services. |
| [headscale](headscale) | A self-hosted implementation of the Tailscale control server. |
| [infisical](infisical) | A tool for managing sensitive information and preventing secret leaks in Olares development. |
| [knowledge](knowledge) | Stores content such as web pages, videos, audio files, PDFs, and EPUBs that users collect. |
| [kube-state-metrics](kube-state-metrics) | A service that listens to the Kubernetes API server and generates metrics about the state of the objects. |
| [l4-bfl-proxy](l4-bfl-proxy) | A Layer 4 network proxy for BFL (Backend For Launcher). |
| [market](market) | A decentralized and permissionless app store for installing, uninstalling, and updating applications and recommendation algorithms. |
| [monitor](monitor) | Used for system monitoring and resource usage tracking. |
| [notifications](notifications) | Delivers system-wide notifications. |
| [osnode-init](osnode-init) | Initializes the Olares node. |
| [reverse-proxy](reverse-proxy) | Options include Cloudflare Tunnel, Olares Tunnel, and self-built FRP. |
| [rsshub](rsshub) | Generates RSS feeds for easier content subscription. |
| [seahub](seahub) | The web frontend for the Seafile file hosting platform. |
| [search3](search3) | Provides full-text search for stored content in Knowledge and Files. |
| [system-server](system-server) | Manages permissions for inter-application API calls and handles network routing between applications and database middlewares. |
| [upgrade](upgrade) | Supports automated system upgrades. |
| [vault](vault) | Protects sensitive data like accounts, passwords, and mnemonics. |

View File

@@ -1,147 +0,0 @@
{{ $analytics_rootpath := printf "%s%s" .Values.rootPath "/rootfs/analytics" }}
{{- $namespace := printf "%s" "os-framework" -}}
{{- $analytics_secret := (lookup "v1" "Secret" $namespace "analytics-secrets") -}}
{{- $pg_password := "" -}}
{{ if $analytics_secret -}}
{{ $pg_password = (index $analytics_secret "data" "pg_password") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: analytics-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
pg_password: {{ $pg_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: analytics-pg
namespace: {{ .Release.Namespace }}
spec:
app: analytics
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: analytics_os_framework
password:
valueFrom:
secretKeyRef:
key: pg_password
name: analytics-secrets
databases:
- name: analytics
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: analytics-server
namespace: {{ .Release.Namespace }}
labels:
app: analytics-server
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: analytics-server
template:
metadata:
labels:
app: analytics-server
spec:
initContainers:
- name: init-container
image: 'postgres:16.0-alpine3.18'
command:
- sh
- '-c'
- >-
echo -e "Checking for the availability of PostgreSQL Server deployment"; until psql -h $PGHOST -p $PGPORT -U $PGUSER -d $PGDB -c "SELECT 1"; do sleep 1; printf "-"; done; sleep 5; echo -e " >> PostgreSQL DB Server has started";
env:
- name: PGHOST
value: citus-0.citus-headless.os-platform
- name: PGPORT
value: "5432"
- name: PGUSER
value: analytics_os_framework
- name: PGPASSWORD
value: {{ $pg_password | b64dec }}
- name: PGDB
value: os_framework_analytics
containers:
- name: analytics-server
image: beclab/analytics-api:v0.0.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010
env:
- name: PRISMA_ENGINES_CHECKSUM_IGNORE_MISSING
value: '1'
- name: PL_DATA_BACKEND
value: postgres
- name: PL_DATA_POSTGRES_HOST
value: citus-0.citus-headless.os-platform
- name: PL_DATA_POSTGRES_PORT
value: "5432"
- name: PL_DATA_POSTGRES_DATABASE
value: os_framework_analytics
- name: PL_DATA_POSTGRES_USER
value: analytics_os_framework
- name: PL_DATA_POSTGRES_PASSWORD
value: {{ $pg_password | b64dec }}
- name: DATABASE_URL
value: postgres://$(PL_DATA_POSTGRES_USER):$(PL_DATA_POSTGRES_PASSWORD)@$(PL_DATA_POSTGRES_HOST)/$(PL_DATA_POSTGRES_DATABASE)?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
name: analytics-server
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: analytics-server
ports:
- name: server
protocol: TCP
port: 3010
targetPort: 3010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: analytics-user-create-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: user.create
callback: http://analytics-server.{{ .Release.Namespace }}:3010/callback/create
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: SysEventRegistry
metadata:
name: analytics-user-delete-cb
namespace: {{ .Release.Namespace }}
spec:
type: subscriber
event: user.delete
callback: http://analytics-server.{{ .Release.Namespace }}:3010/callback/delete

View File

@@ -1,3 +0,0 @@
# analytics
https://github.com/beclab/analytic

View File

@@ -163,7 +163,7 @@ spec:
priorityClassName: "system-cluster-critical"
containers:
- name: app-service
image: beclab/app-service:0.3.47
image: beclab/app-service:0.3.52
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
@@ -179,7 +179,7 @@ spec:
- name: REQUIRE_PERMISSION_APPS
value: "vault,desktop,message,wise,search,appstore,notification,dashboard,settings,studio,profile"
- name: SYS_APPS
value: "analytics,market,auth,citus,desktop,did,docs,files,fsnotify,headscale,infisical,intentprovider,ksserver,message,mongo,monitoring,notifications,profile,redis,wise,recommend,seafile,search,search-admin,settings,systemserver,tapr,vault,video,zinc,accounts,control-hub,dashboard,nitro,system-frontend,studio"
value: "market,auth,citus,desktop,did,docs,files,fsnotify,headscale,infisical,intentprovider,ksserver,message,mongo,monitoring,notifications,profile,redis,wise,recommend,seafile,search,search-admin,settings,systemserver,tapr,vault,video,zinc,accounts,control-hub,dashboard,nitro,system-frontend,studio"
- name: GENERATED_APPS
value: "citus,mongo-cluster-cfg,mongo-cluster-mongos,mongo-cluster-rs0,frp-agent,l4-bfl-proxy,drc-redis-cluster,appdata-backend,argoworkflows,argoworkflow-workflow-controller,velero,kvrocks"
- name: WS_CONTAINER_IMAGE
@@ -398,7 +398,7 @@ spec:
hostNetwork: true
containers:
- name: image-service
image: beclab/image-service:0.3.47
image: beclab/image-service:0.3.50
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
@@ -409,7 +409,7 @@ spec:
fieldRef:
fieldPath: spec.nodeName
- name: SYS_APPS
value: "analytics,market,auth,citus,desktop,did,docs,files,fsnotify,headscale,infisical,intentprovider,ksserver,message,mongo,monitoring,notifications,profile,redis,wise,recommend,seafile,search,nitro,search-admin,settings,systemserver,tapr,vault,video,zinc,accounts,control-hub,dashboard"
value: "market,auth,citus,desktop,did,docs,files,fsnotify,headscale,infisical,intentprovider,ksserver,message,mongo,monitoring,notifications,profile,redis,wise,recommend,seafile,search,nitro,search-admin,settings,systemserver,tapr,vault,video,zinc,accounts,control-hub,dashboard"
volumeMounts:
- mountPath: /var/run/containerd
mountPropagation: Bidirectional
@@ -440,25 +440,29 @@ spec:
secretKeyRef:
key: nats_password
name: app-service-nats-secret
refs:
- appName: user-service
appNamespace: os
subjects:
- name: "application.*"
perm:
- pub
- sub
subjects:
- name: "application.*"
permission:
pub: allow
sub: allow
- name: application
permission:
pub: allow
sub: allow
- name: "users.*"
permission:
pub: allow
sub: allow
- name: users
permission:
pub: allow
sub: deny
sub: allow
- name: "groups.*"
permission:
pub: allow
sub: allow
- name: groups
permission:
pub: allow
sub: deny
sub: allow
user: os-app-service

View File

@@ -1,3 +1,6 @@
# app-service
# `app-service`
## Overview
The `app-service` component is a core part of the Olares framework, responsible for handling the lifecycle of applications. This includes managing their installation, updates, and removal, as well as overseeing resource allocation to ensure that all applications run smoothly and efficiently within the Olares ecosystem.
https://github.com/beclab/app-service

View File

@@ -1,38 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusterworkflowtemplates.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: ClusterWorkflowTemplate
listKind: ClusterWorkflowTemplateList
plural: clusterworkflowtemplates
shortNames:
- clusterwftmpl
- cwft
singular: clusterworkflowtemplate
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
required:
- metadata
- spec
type: object
served: true
storage: true

View File

@@ -1,42 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: cronworkflows.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: CronWorkflow
listKind: CronWorkflowList
plural: cronworkflows
shortNames:
- cwf
- cronwf
singular: cronworkflow
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
status:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
required:
- metadata
- spec
type: object
served: true
storage: true

View File

@@ -1,43 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: workflowartifactgctasks.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: WorkflowArtifactGCTask
listKind: WorkflowArtifactGCTaskList
plural: workflowartifactgctasks
shortNames:
- wfat
singular: workflowartifactgctask
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
status:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
required:
- metadata
- spec
type: object
served: true
storage: true
subresources:
status: {}

View File

@@ -1,37 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: workfloweventbindings.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: WorkflowEventBinding
listKind: WorkflowEventBindingList
plural: workfloweventbindings
shortNames:
- wfeb
singular: workfloweventbinding
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
required:
- metadata
- spec
type: object
served: true
storage: true

View File

@@ -1,57 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: workflows.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: Workflow
listKind: WorkflowList
plural: workflows
shortNames:
- wf
singular: workflow
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Status of the workflow
jsonPath: .status.phase
name: Status
type: string
- description: When the workflow was started
format: date-time
jsonPath: .status.startedAt
name: Age
type: date
- description: Human readable message indicating details about why the workflow
is in this condition.
jsonPath: .status.message
name: Message
type: string
name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
status:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
required:
- metadata
- spec
type: object
served: true
storage: true
subresources: {}

View File

@@ -1,599 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: workflowtaskresults.argoproj.io
annotations:
"helm.sh/resource-policy": keep
spec:
group: argoproj.io
names:
kind: WorkflowTaskResult
listKind: WorkflowTaskResultList
plural: workflowtaskresults
singular: workflowtaskresult
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
message:
type: string
metadata:
type: object
outputs:
properties:
artifacts:
items:
properties:
archive:
properties:
none:
type: object
tar:
properties:
compressionLevel:
format: int32
type: integer
type: object
zip:
type: object
type: object
archiveLogs:
type: boolean
artifactGC:
properties:
podMetadata:
properties:
annotations:
additionalProperties:
type: string
type: object
labels:
additionalProperties:
type: string
type: object
type: object
serviceAccountName:
type: string
strategy:
enum:
- ""
- OnWorkflowCompletion
- OnWorkflowDeletion
- Never
type: string
type: object
artifactory:
properties:
passwordSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
url:
type: string
usernameSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
required:
- url
type: object
azure:
properties:
accountKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
blob:
type: string
container:
type: string
endpoint:
type: string
useSDKCreds:
type: boolean
required:
- blob
- container
- endpoint
type: object
deleted:
type: boolean
from:
type: string
fromExpression:
type: string
gcs:
properties:
bucket:
type: string
key:
type: string
serviceAccountKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
required:
- key
type: object
git:
properties:
branch:
type: string
depth:
format: int64
type: integer
disableSubmodules:
type: boolean
fetch:
items:
type: string
type: array
insecureIgnoreHostKey:
type: boolean
passwordSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
repo:
type: string
revision:
type: string
singleBranch:
type: boolean
sshPrivateKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
usernameSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
required:
- repo
type: object
globalName:
type: string
hdfs:
properties:
addresses:
items:
type: string
type: array
force:
type: boolean
hdfsUser:
type: string
krbCCacheSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
krbConfigConfigMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
krbKeytabSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
krbRealm:
type: string
krbServicePrincipalName:
type: string
krbUsername:
type: string
path:
type: string
required:
- path
type: object
http:
properties:
auth:
properties:
basicAuth:
properties:
passwordSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
usernameSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
type: object
clientCert:
properties:
clientCertSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
clientKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
type: object
oauth2:
properties:
clientIDSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
clientSecretSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
endpointParams:
items:
properties:
key:
type: string
value:
type: string
required:
- key
type: object
type: array
scopes:
items:
type: string
type: array
tokenURLSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
type: object
type: object
headers:
items:
properties:
name:
type: string
value:
type: string
required:
- name
- value
type: object
type: array
url:
type: string
required:
- url
type: object
mode:
format: int32
type: integer
name:
type: string
optional:
type: boolean
oss:
properties:
accessKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
bucket:
type: string
createBucketIfNotPresent:
type: boolean
endpoint:
type: string
key:
type: string
lifecycleRule:
properties:
markDeletionAfterDays:
format: int32
type: integer
markInfrequentAccessAfterDays:
format: int32
type: integer
type: object
secretKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
securityToken:
type: string
useSDKCreds:
type: boolean
required:
- key
type: object
path:
type: string
raw:
properties:
data:
type: string
required:
- data
type: object
recurseMode:
type: boolean
s3:
properties:
accessKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
bucket:
type: string
caSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
createBucketIfNotPresent:
properties:
objectLocking:
type: boolean
type: object
encryptionOptions:
properties:
enableEncryption:
type: boolean
kmsEncryptionContext:
type: string
kmsKeyId:
type: string
serverSideCustomerKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
type: object
endpoint:
type: string
insecure:
type: boolean
key:
type: string
region:
type: string
roleARN:
type: string
secretKeySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
useSDKCreds:
type: boolean
type: object
subPath:
type: string
required:
- name
type: object
type: array
exitCode:
type: string
parameters:
items:
properties:
default:
type: string
description:
type: string
enum:
items:
type: string
type: array
globalName:
type: string
name:
type: string
value:
type: string
valueFrom:
properties:
configMapKeyRef:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
default:
type: string
event:
type: string
expression:
type: string
jqFilter:
type: string
jsonPath:
type: string
parameter:
type: string
path:
type: string
supplied:
type: object
type: object
required:
- name
type: object
type: array
result:
type: string
type: object
phase:
type: string
progress:
type: string
required:
- metadata
type: object
served: true
storage: true


@@ -1,43 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: workflowtasksets.argoproj.io
  annotations:
    "helm.sh/resource-policy": keep
spec:
  group: argoproj.io
  names:
    kind: WorkflowTaskSet
    listKind: WorkflowTaskSetList
    plural: workflowtasksets
    shortNames:
      - wfts
    singular: workflowtaskset
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              x-kubernetes-map-type: atomic
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-map-type: atomic
              x-kubernetes-preserve-unknown-fields: true
          required:
            - metadata
            - spec
          type: object
      served: true
      storage: true
      subresources:
        status: {}


@@ -1,37 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: workflowtemplates.argoproj.io
  annotations:
    "helm.sh/resource-policy": keep
spec:
  group: argoproj.io
  names:
    kind: WorkflowTemplate
    listKind: WorkflowTemplateList
    plural: workflowtemplates
    shortNames:
      - wftmpl
    singular: workflowtemplate
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              x-kubernetes-map-type: atomic
              x-kubernetes-preserve-unknown-fields: true
          required:
            - metadata
            - spec
          type: object
      served: true
      storage: true


@@ -1,67 +0,0 @@
{{- $namespace := printf "%s" "os-framework" -}}
{{- $rss_secret := (lookup "v1" "Secret" $namespace "rss-secrets") -}}
{{- $password := "" -}}
{{ if $rss_secret -}}
{{ $password = (index $rss_secret "data" "pg_password") }}
{{ else -}}
{{ $password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password := "" -}}
{{ if $rss_secret -}}
{{ $redis_password = (index $rss_secret "data" "redis_password") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $redis_password_data := "" -}}
{{ $redis_password_data = $redis_password | b64dec }}
{{- $pg_password_data := "" -}}
{{ $pg_password_data = $password | b64dec }}
{{- $pg_user := printf "%s" "argo_os_framework" -}}
{{- $pg_user = $pg_user | b64enc -}}
---
apiVersion: v1
kind: Secret
metadata:
  name: rss-secrets
  namespace: {{ .Release.Namespace }}
type: Opaque
data:
  pg_user: {{ $pg_user }}
  pg_password: {{ $password }}
  redis_password: {{ $redis_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
  name: rss-pg
  namespace: {{ .Release.Namespace }}
spec:
  app: rss
  appNamespace: {{ .Release.Namespace }}
  middleware: postgres
  postgreSQL:
    user: argo_os_framework
    password:
      valueFrom:
        secretKeyRef:
          key: pg_password
          name: rss-secrets
    databases:
      - name: rss
      - name: rss_v1
      - name: argo


@@ -1,94 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflows
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-server
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - events
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - delete
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - list
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - watch
      - create
      - patch
  - apiGroups:
      - argoproj.io
    resources:
      - eventsources
      - sensors
      - workflows
      - workfloweventbindings
      - workflowtemplates
      - cronworkflows
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - patch
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflows-cluster-template
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-server
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - clusterworkflowtemplates
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete


@@ -1,26 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Namespace }}:argoworkflows
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflows
subjects:
  - kind: ServiceAccount
    name: argoworkflows
    namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Namespace }}:argoworkflows-cluster-template
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflows-cluster-template
subjects:
  - kind: ServiceAccount
    name: argoworkflows
    namespace: {{ .Release.Namespace }}


@@ -1,86 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argoworkflows
  namespace: {{ .Release.Namespace }}
  labels:
    app: argoworkflows
    applications.app.bytetrade.io/author: bytetrade.io
    app.kubernetes.io/managed-by: Helm
  annotations:
    applications.app.bytetrade.io/icon: https://argoproj.github.io/argo-workflows/assets/logo.png
    applications.app.bytetrade.io/title: argoworkflows
    applications.app.bytetrade.io/version: '0.35.0'
spec:
  selector:
    matchLabels:
      app: argoworkflows
  template:
    metadata:
      labels:
        app: argoworkflows
    spec:
      serviceAccountName: argoworkflows
      containers:
        - name: argo-server
          image: quay.io/argoproj/argocli:v3.5.0
          imagePullPolicy: IfNotPresent
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
          args:
            - server
            - --configmap=argoworkflow-workflow-controller-configmap
            - "--auth-mode=server"
            - "--secure=false"
            - "--x-frame-options="
            - "--loglevel"
            - "debug"
            - "--gloglevel"
            - "0"
            - "--log-format"
            - "text"
          ports:
            - name: web
              containerPort: 2746
          readinessProbe:
            httpGet:
              path: /
              port: 2746
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 20
          env:
            - name: IN_CLUSTER
              value: "true"
            - name: ARGO_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: BASE_HREF
              value: /
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 300
        - key: node.kubernetes.io/unreachable
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 300


@@ -1,6 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argoworkflows
  namespace: {{ .Release.Namespace }}


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: argoworkflows-svc
  namespace: {{ .Release.Namespace }}
spec:
  ports:
    - port: 2746
      name: http
      protocol: TCP
      targetPort: 2746
  selector:
    app: argoworkflows
  sessionAffinity: None
  type: ClusterIP


@@ -1,105 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflow-view
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-workflow-controller
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: workflow-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
      - workflows/finalizers
      - workfloweventbindings
      - workfloweventbindings/finalizers
      - workflowtemplates
      - workflowtemplates/finalizers
      - cronworkflows
      - cronworkflows/finalizers
      - clusterworkflowtemplates
      - clusterworkflowtemplates/finalizers
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflow-edit
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-server
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
      - workflows/finalizers
      - workfloweventbindings
      - workfloweventbindings/finalizers
      - workflowtemplates
      - workflowtemplates/finalizers
      - cronworkflows
      - cronworkflows/finalizers
      - clusterworkflowtemplates
      - clusterworkflowtemplates/finalizers
    verbs:
      - create
      - delete
      - deletecollection
      - get
      - list
      - patch
      - update
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflow-admin
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-server
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
      - workflows/finalizers
      - workfloweventbindings
      - workfloweventbindings/finalizers
      - workflowtasksets
      - workflowtasksets/finalizers
      - workflowtemplates
      - workflowtemplates/finalizers
      - cronworkflows
      - cronworkflows/finalizers
      - clusterworkflowtemplates
      - clusterworkflowtemplates/finalizers
    verbs:
      - create
      - delete
      - deletecollection
      - get
      - list
      - patch
      - update
      - watch


@@ -1,178 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflow-workflow-controller
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-workflow-controller
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: workflow-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - pods/exec
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
      - persistentvolumeclaims/finalizers
    verbs:
      - create
      - update
      - delete
      - get
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
      - workflows/finalizers
      - workflowtasksets
      - workflowtasksets/finalizers
      - workflowartifactgctasks
    verbs:
      - get
      - list
      - watch
      - update
      - patch
      - delete
      - create
  - apiGroups:
      - argoproj.io
    resources:
      - workflowtemplates
      - workflowtemplates/finalizers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - argoproj.io
    resources:
      - workflowtaskresults
      - workflowtaskresults/finalizers
    verbs:
      - list
      - watch
      - deletecollection
  - apiGroups:
      - argoproj.io
    resources:
      - cronworkflows
      - cronworkflows/finalizers
    verbs:
      - get
      - list
      - watch
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - ""
    resources:
      - serviceaccounts
    verbs:
      - get
      - list
  - apiGroups:
      - "policy"
    resources:
      - poddisruptionbudgets
    verbs:
      - create
      - get
      - delete
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - patch
      - delete
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    resourceNames:
      - workflow-controller
      - workflow-controller-lease
    verbs:
      - create
      - get
      - list
      - watch
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
    resourceNames:
      - rss-secrets
      - argo-workflows-agent-ca-certificates
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoworkflow-workflow-controller-cluster-template
  labels:
    helm.sh/chart: argoworkflows-0.35.0
    app.kubernetes.io/name: argoworkflows-workflow-controller
    app.kubernetes.io/instance: rss
    app.kubernetes.io/component: workflow-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argo-workflows
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - clusterworkflowtemplates
      - clusterworkflowtemplates/finalizers
    verbs:
      - get
      - list
      - watch


@@ -1,40 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: argoworkflow-workflow-controller-configmap
  namespace: {{ .Release.Namespace }}
data:
  config: |
    instanceID: {{ .Release.Namespace }}
    artifactRepository:
      archiveLogs: true
      s3:
        accessKeySecret:
          key: AWS_ACCESS_KEY_ID
          name: argo-workflow-log-fakes3
        secretKeySecret:
          key: AWS_SECRET_ACCESS_KEY
          name: argo-workflow-log-fakes3
        bucket: mongo-backup
        endpoint: tapr-s3-svc:4568
        insecure: true
    persistence:
      connectionPool:
        maxIdleConns: 5
        maxOpenConns: 0
      archive: true
      archiveTTL: 5d
      postgresql:
        host: citus-headless.os-platform
        port: 5432
        database: os_framework_argo
        tableName: argo_workflows
        userNameSecret:
          name: rss-secrets
          key: pg_user
        passwordSecret:
          name: rss-secrets
          key: pg_password
    nodeEvents:
      enabled: true


@@ -1,27 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Namespace }}:argoworkflow-workflow-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflow-workflow-controller
subjects:
  - kind: ServiceAccount
    name: argoworkflow-workflow-controller
    namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Namespace }}:argoworkflow-workflow-controller-cluster-template
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoworkflow-workflow-controller-cluster-template
subjects:
  - kind: ServiceAccount
    name: argoworkflow-workflow-controller
    namespace: {{ .Release.Namespace }}


@@ -1,90 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argoworkflow-workflow-controller
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/component: workflow-controller
    applications.app.bytetrade.io/author: bytetrade.io
    app.kubernetes.io/instance: argo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argoworkflows-workflow-controller
    app.kubernetes.io/part-of: argo-workflows
    app.kubernetes.io/version: v3.5.0
    helm.sh/chart: argoworkflows-0.35.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: argo
      app.kubernetes.io/name: argoworkflows-workflow-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/component: workflow-controller
        app.kubernetes.io/instance: argo
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: argoworkflows-workflow-controller
        app.kubernetes.io/part-of: argo-workflows
        app.kubernetes.io/version: v3.5.0
        helm.sh/chart: argoworkflows-0.35.0
    spec:
      serviceAccountName: argoworkflow-workflow-controller
      serviceAccount: argoworkflow-workflow-controller
      schedulerName: default-scheduler
      containers:
        - name: controller
          image: quay.io/argoproj/workflow-controller:v3.5.0
          imagePullPolicy: IfNotPresent
          command: [ "workflow-controller" ]
          args:
            - "--configmap"
            - "argoworkflow-workflow-controller-configmap"
            - "--executor-image"
            - "quay.io/argoproj/argoexec:v3.5.0"
            - "--loglevel"
            - "debug"
            - "--gloglevel"
            - "0"
            - "--log-format"
            - "text"
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
          env:
            - name: ARGO_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: LEADER_ELECTION_IDENTITY
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            - name: metrics
              containerPort: 9090
              protocol: TCP
            - containerPort: 6060
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 6060
              scheme: HTTP
            initialDelaySeconds: 90
            timeoutSeconds: 30
            periodSeconds: 60
            successThreshold: 1
            failureThreshold: 3
      nodeSelector:
        kubernetes.io/os: linux


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argoworkflow-workflow-controller
  namespace: {{ .Release.Namespace }}
