Compare commits

...

39 Commits

Author SHA1 Message Date
eball
933a093ce4 fix: admin user not found 2025-06-30 14:50:26 +08:00
eball
188028c186 feat: refresh user expiring certs 2025-06-30 14:13:44 +08:00
berg
27d9715292 market: multi user multi source (#1490)
* multi user & multi source & pre-render and collect image download progress & custom render variants

* support GlobalEnvs

* feat: release system-frontend: v1.3.88

* feat: app-service, studio-server

* feat: update market backend version

---------

Co-authored-by: Sai <kldtks@live.com>
Co-authored-by: hys <hysyeah@gmail.com>
2025-06-28 16:46:44 +08:00
salt
10d6c2a6fa fix: 1. fix: like 'why-olares.md', if input 'why', 'olares', search w… (#1491)
fix: 1. fix: like 'why-olares.md', if input 'why', 'olares', search without result 2.when generate_monitor_folder_path_list for convert_from_physical_path_to_frontend_resource_uri not propagate error

Co-authored-by: ubuntu <you@example.com>
2025-06-28 16:46:10 +08:00
eball
57d8a55d8d authelia: add user list api (#1489) 2025-06-27 22:07:27 +08:00
dkeven
b9a227acd7 fix(manifest): update the missed reverse proxy image version (#1488) 2025-06-27 11:27:07 +08:00
wiy
e6115794ce feat(system-frontend): update system-frontend new version to v1.3.86 (#1487) 2025-06-27 11:24:02 +08:00
dkeven
22739c90db fix(manifest): add missing app author label to argo deploy (#1486) 2025-06-27 11:23:29 +08:00
dkeven
6fac46130a perf(gpu): use our fork of dcgm-exporter with lower memory consumption (#1485) 2025-06-27 11:23:07 +08:00
simon
e19e049e7d feat(knowledge): add youtube feed and optimize the file name for aria2 download (#1481)
knowledge v0.12.12
2025-06-26 15:53:40 +08:00
wiy
1d0c20d6ad fix(system-frontend): copy nginx address error (#1484) 2025-06-26 15:16:18 +08:00
dkeven
397590d402 fix(cli): set health host of felix to lo addr explicitly (#1483) 2025-06-26 15:15:53 +08:00
hysyeah
fc1a59b79b ks,cli: remove host_ip label from some metric (#1482)
ks,cli: remove host_ip label from metric
2025-06-26 00:05:10 +08:00
eball
3dea149790 olaresd: network interface api modifed and nvstream mdns bug fix (#1480) 2025-06-26 00:04:10 +08:00
0x7fffff92
9d6834faa1 feat(tailscale): let tailscale run on the node where headscale is run… (#1479)
feat(tailscale): let tailscale run on the node where headscale is running

Co-authored-by: 0x7fffff92 <0x7fffff92@example.com>
2025-06-26 00:03:51 +08:00
dkeven
bef61309a3 feat(cli): set explicit image gc policy when installing K8s (#1478) 2025-06-26 00:03:04 +08:00
salt
cf52a59ef7 feat: search3 support multiple node for cache and external, run as daemonset (#1477)
* feat: search3 support multiple node for cache and external, and search3monitor run in daemon set

* fix: fix search3 iniialization fail because of not exist table __diesel_schema_migrations

---------

Co-authored-by: ubuntu <you@example.com>
2025-06-26 00:02:36 +08:00
wiy
80023be159 feat(system-frontend): merge system apps main (#1476)
* feat(system-frontend): merge apps into one image

* fix(system-frontend): update image version to v1.3.85

---------

Co-authored-by: yyh <24493052+yongheng2016@users.noreply.github.com>
2025-06-26 00:02:03 +08:00
eball
ae3e4e6bb9 gpu: refactor gpu scheduler with cpp (#1475) 2025-06-24 23:29:13 +08:00
dkeven
8c9e4d532b fix(daemon): upgrade runc dependency to fix vulnerability (#1473) 2025-06-24 21:33:43 +08:00
eball
3c48afb5b5 olares: move gpu package (#1474)
* olares: move gpu package

* fix: hami webui image
2025-06-24 21:32:37 +08:00
dkeven
3d22a01eef fix(cli): do not wait for recreation of pods without owner when changing ip (#1472) 2025-06-23 23:26:41 +08:00
eball
d6263bacca authelia: remove httponly option from set-cookie (#1471) 2025-06-23 23:25:55 +08:00
hysyeah
3b070ea095 node-exporter: add pcie_version,sata_version label for disk metric (#1470)
node-exporter: add pcie_version,sata_version label for node_disk_smartctl_info metric
2025-06-23 23:25:19 +08:00
dkeven
82b715635b feat: build and use hami-webui images using our own repo (#1469) 2025-06-23 23:24:38 +08:00
Peng Peng
1d4494c8d7 feat(user-service, notification, analytics): put prisma library under node_moudles in dockers (#1468)
feat: add prisma dependency to the docker
2025-06-23 11:22:31 +08:00
simon
56f5c07229 feat(knowledge): add ebook , pdf download and article extractor (#1467)
knowledge v0.12.11
2025-06-21 02:08:19 +08:00
berg
697ac440c7 wise, studio, desktop, dashboard: update system frontend version to v1.3.82 (#1466)
feat: update system frontend version to v1.3.82
2025-06-21 02:07:58 +08:00
eball
f0edbc08a6 gpu: bump libvgpu.so version (#1465) 2025-06-20 20:31:41 +08:00
eball
001607e840 authelia: add SameSite option to set-cookie (#1464) 2025-06-20 20:31:23 +08:00
dkeven
e8f525daca refactor(daemon): new scheme for upgrade APIs and operations (#1463) 2025-06-20 20:30:46 +08:00
salt
6d6f7705c9 feat: return search3 result with standard resource_urri (#1462)
* fix: fix search3 escape error

* feat: for search return resource_uri with standard mode

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
2025-06-20 11:18:01 +08:00
wiy
46b7fa0079 feat(system-frontend): update desktop files search; update dashboard chart components; (#1461) 2025-06-20 00:27:06 +08:00
hysyeah
793a62396b lldap,system-server: pub event async; chanage secret ns (#1460)
lldap,system-server: pub event async
2025-06-20 00:26:44 +08:00
eball
7cb4975f5b authelia: replace http session with lldap jwt (#1459)
* authelia: replace http session with lldap jwt

* fix: remove check auth

* fix: set default configuration

* fix: revert pg and nats configuration
2025-06-20 00:26:12 +08:00
eball
bfaf647ad1 tapr, cli:add extension vchord to pg and decrease k3s image fs threshold (#1458)
* tapr, cli:add extension vchord to pg and decrease k3s image fs threshold

* fix: image tag
2025-06-19 23:18:56 +08:00
hysyeah
23d3dc58ed lldap,tapr: add totp api (#1456) 2025-06-19 00:20:18 +08:00
yyh
7bf07f36b7 feat(system-frontend): update dashboard, control hub, and settings image (#1455)
* feat(system-frontend): update dashboard, control hub, and settings images to v1.3.80

* feat(ks_server): add environment variables for NODE_IP and TERMINUSD_HOST
2025-06-19 00:19:17 +08:00
eball
7e7117fc3a cli, daemon: persist the user name to the Olares release file (#1454) 2025-06-19 00:18:38 +08:00
136 changed files with 1946 additions and 2643 deletions

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,26 +0,0 @@
apiVersion: v2
name: appstore
description: A Helm chart for Kubernetes
maintainers:
- name: bytetrade
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "appstore.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "appstore.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "appstore.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "appstore.labels" -}}
helm.sh/chart: {{ include "appstore.chart" . }}
{{ include "appstore.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "appstore.selectorLabels" -}}
app.kubernetes.io/name: {{ include "appstore.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "appstore.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "appstore.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -1,353 +0,0 @@
{{- $market_secret := (lookup "v1" "Secret" .Release.Namespace "market-secrets") -}}
{{- $redis_password := "" -}}
{{ if $market_secret -}}
{{ $redis_password = (index $market_secret "data" "redis-passwords") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $market_backend_nats_secret := (lookup "v1" "Secret" .Release.Namespace "market-backend-nats-secret") -}}
{{- $nats_password := "" -}}
{{ if $market_backend_nats_secret -}}
{{ $nats_password = (index $market_backend_nats_secret "data" "nats_password") }}
{{ else -}}
{{ $nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: market-backend-nats-secret
namespace: {{ .Release.Namespace }}
type: Opaque
data:
nats_password: {{ $nats_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: market-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
redis-passwords: {{ $redis_password }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: market-deployment
namespace: {{ .Release.Namespace }}
labels:
app: appstore
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
selector:
matchLabels:
app: appstore
template:
metadata:
labels:
app: appstore
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "appstore-backend"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/opt/app/market"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
- authelia-backend.os-framework:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: terminus-sidecar-init
image: openservicemesh/init:v1.2.3
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
runAsNonRoot: false
runAsUser: 0
command:
- /bin/sh
- -c
- |
iptables-restore --noflush <<EOF
# sidecar interception rules
*nat
:PROXY_IN_REDIRECT - [0:0]
:PROXY_INBOUND - [0:0]
-A PROXY_IN_REDIRECT -p tcp -j REDIRECT --to-port 15003
-A PROXY_INBOUND -p tcp --dport 15000 -j RETURN
-A PROXY_INBOUND -p tcp -j PROXY_IN_REDIRECT
-A PREROUTING -p tcp -j PROXY_INBOUND
COMMIT
EOF
env:
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
containers:
- name: appstore-backend
image: beclab/market-backend:v0.3.12
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
env:
- name: OS_SYSTEM_SERVER
value: system-server.user-system-{{ .Values.bfl.username }}
- name: OS_APP_SECRET
value: '{{ .Values.os.appstore.appSecret }}'
- name: OS_APP_KEY
value: {{ .Values.os.appstore.appKey }}
- name: APP_SOTRE_SERVICE_SERVICE_HOST
value: appstore-server-prod.bttcdn.com
- name: MARKET_PROVIDER
value: '{{ .Values.os.appstore.marketProvider }}'
- name: APP_SOTRE_SERVICE_SERVICE_PORT
value: '443'
- name: APP_SERVICE_SERVICE_HOST
value: app-service.os-framework
- name: APP_SERVICE_SERVICE_PORT
value: '6755'
- name: REPO_URL_PORT
value: "82"
- name: REDIS_ADDRESS
value: 'redis-cluster-proxy.user-system-{{ .Values.bfl.username }}:6379'
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: market-secrets
key: redis-passwords
- name: REDIS_DB_NUMBER
value: '0'
- name: REPO_URL_HOST
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NATS_HOST
value: nats.user-system-{{ .Values.bfl.username }}
- name: NATS_PORT
value: '4222'
- name: NATS_USERNAME
value: os-market-backend
- name: NATS_PASSWORD
valueFrom:
secretKeyRef:
name: market-backend-nats-secret
key: nats_password
- name: NATS_SUBJECT_USER_APPLICATION
value: terminus.user.application.{{ .Values.bfl.username}}
volumeMounts:
- name: opt-data
mountPath: /opt/app/data
- name: terminus-envoy-sidecar
image: bytetrade/envoy:v1.25.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
ports:
- name: proxy-admin
containerPort: 15000
- name: proxy-inbound
containerPort: 15003
volumeMounts:
- name: terminus-sidecar-config
readOnly: true
mountPath: /etc/envoy/envoy.yaml
subPath: envoy.yaml
command:
- /usr/local/bin/envoy
- --log-level
- debug
- -c
- /etc/envoy/envoy.yaml
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: terminus-ws-sidecar
image: 'beclab/ws-gateway:v1.0.5'
command:
- /ws-gateway
env:
- name: WS_PORT
value: '81'
- name: WS_URL
value: /app-store/v1/websocket/message
resources: { }
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
volumes:
- name: terminus-sidecar-config
configMap:
name: sidecar-ws-configs
items:
- key: envoy.yaml
path: envoy.yaml
- name: opt-data
hostPath:
path: '{{ .Values.userspace.appData}}/appstore/data'
type: DirectoryOrCreate
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: appstore-service
namespace: {{ .Release.Namespace }}
spec:
selector:
app: appstore
type: ClusterIP
ports:
- protocol: TCP
name: appstore-backend
port: 81
targetPort: 81
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ApplicationPermission
metadata:
name: appstore
namespace: user-system-{{ .Values.bfl.username }}
spec:
app: appstore
appid: appstore
key: {{ .Values.os.appstore.appKey }}
secret: {{ .Values.os.appstore.appSecret }}
permissions:
- dataType: event
group: message-disptahcer.system-server
ops:
- Create
version: v1
- dataType: app
group: service.bfl
ops:
- UserApps
version: v1
status:
state: active
---
apiVersion: sys.bytetrade.io/v1alpha1
kind: ProviderRegistry
metadata:
name: appstore-backend-provider
namespace: user-system-{{ .Values.bfl.username }}
spec:
dataType: app
deployment: market
description: app store provider
endpoint: appstore-service.{{ .Release.Namespace }}:81
group: service.appstore
kind: provider
namespace: {{ .Release.Namespace }}
opApis:
- name: InstallDevApp
uri: /app-store/v1/applications/provider/installdev
- name: UninstallDevApp
uri: /app-store/v1/applications/provider/uninstalldev
version: v1
status:
state: active
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-redis
namespace: {{ .Release.Namespace }}
spec:
app: market
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis-passwords
name: market-secrets
namespace: market
---
apiVersion: v1
kind: Service
metadata:
name: appstore-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: appstore
ports:
- name: "appstore-backend"
protocol: TCP
port: 81
targetPort: 81
- name: "appstore-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-backend-nats
namespace: {{ .Release.Namespace }}
spec:
app: market-backend
appNamespace: os
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nats_password
name: market-backend-nats-secret
refs:
- appName: user-service
appNamespace: os
subjects:
- name: "application.*"
perm:
- pub
- sub
- appName: user-service
appNamespace: os
subjects:
- name: "market.*"
perm:
- pub
- sub
user: os-market-backend

View File

@@ -1,44 +0,0 @@
bfl:
nodeport: 30883
nodeport_ingress_http: 30083
nodeport_ingress_https: 30082
username: 'test'
url: 'test'
nodeName: test
pvc:
userspace: test
userspace:
userData: test/Home
appData: test/Data
appCache: test
dbdata: test
docs:
nodeport: 30881
desktop:
nodeport: 30180
os:
portfolio:
appKey: '${ks[0]}'
appSecret: test
vault:
appKey: '${ks[0]}'
appSecret: test
desktop:
appKey: '${ks[0]}'
appSecret: test
message:
appKey: '${ks[0]}'
appSecret: test
rss:
appKey: '${ks[0]}'
appSecret: test
search:
appKey: '${ks[0]}'
appSecret: test
search2:
appKey: '${ks[0]}'
appSecret: test
appstore:
marketProvider: ''
kubesphere:
redis_password: ""

View File

@@ -188,7 +188,7 @@ spec:
containers:
- name: studio
-image: beclab/studio-server:v0.1.51
+image: beclab/studio-server:v0.1.52
imagePullPolicy: IfNotPresent
args:
- server
@@ -206,6 +206,8 @@ spec:
mountPath: /data
- mountPath: /etc/certs
name: certs
+- mountPath: /storage
+name: storage-volume
lifecycle:
preStop:
exec:

View File

@@ -74,6 +74,6 @@ echo "packaging launcher ..."
run_cmd "cp -rf framework/bfl/.olares/config/launcher ${DIST}/wizard/config/"
echo "packaging gpu ..."
-run_cmd "cp -rf framework/gpu/.olares/config/gpu ${DIST}/wizard/config/"
+run_cmd "cp -rf infrastructure/gpu/.olares/config/gpu ${DIST}/wizard/config/"
echo "packaging completed"

View File

@@ -265,7 +265,7 @@ const (
CacheAppServicePod = "app_service_pod_name"
CacheAppValues = "app_built_in_values"
CacheCountPodsUsingHostIP = "count_pods_using_host_ip"
CacheCountPodsWaitForRecreation = "count_pods_wait_for_recreation"
CacheUpgradeUsers = "upgrade_users"
CacheUpgradeAdminUser = "upgrade_admin_user"

View File

@@ -322,10 +322,26 @@ func (a *Argument) SaveReleaseInfo() error {
if a.OlaresVersion == "" {
return errors.New("invalid: empty olares version")
}
releaseInfoMap := map[string]string{
ENV_OLARES_BASE_DIR: a.BaseDir,
ENV_OLARES_VERSION: a.OlaresVersion,
}
if a.User != nil {
releaseInfoMap["OLARES_NAME"] = fmt.Sprintf("%s@%s", a.User.UserName, a.User.DomainName)
} else {
if util.IsExist(OlaresReleaseFile) {
// if the user is not set, try to load the user name from the release file
envs, err := godotenv.Read(OlaresReleaseFile)
if err == nil {
if userName, ok := envs["OLARES_NAME"]; ok {
releaseInfoMap["OLARES_NAME"] = userName
}
}
}
}
if !util.IsExist(filepath.Dir(OlaresReleaseFile)) {
if err := os.MkdirAll(filepath.Dir(OlaresReleaseFile), 0755); err != nil {
return fmt.Errorf("failed to create directory %s: %v", filepath.Dir(OlaresReleaseFile), err)
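
The `SaveReleaseInfo` change above (commit #1454) makes `OLARES_NAME` survive upgrades: when no user is passed in, the previous value is re-read from the existing release file. A simplified, self-contained sketch of that fallback, using plain `KEY=VALUE` parsing in place of the `godotenv` dependency (function names are illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readReleaseEnv parses a minimal KEY=VALUE release file, skipping blank
// lines and comments. A missing file just yields an empty map.
func readReleaseEnv(path string) map[string]string {
	envs := map[string]string{}
	f, err := os.Open(path)
	if err != nil {
		return envs
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			envs[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return envs
}

// buildReleaseInfo mirrors the fallback: prefer an explicitly given user,
// otherwise carry over OLARES_NAME from the old release file.
func buildReleaseInfo(user, domain, oldFile string) map[string]string {
	info := map[string]string{}
	if user != "" {
		info["OLARES_NAME"] = fmt.Sprintf("%s@%s", user, domain)
	} else if name, ok := readReleaseEnv(oldFile)["OLARES_NAME"]; ok {
		info["OLARES_NAME"] = name
	}
	return info
}

func main() {
	tmp, _ := os.CreateTemp("", "olares.release")
	tmp.WriteString("OLARES_VERSION=1.0.0\nOLARES_NAME=alice@olares.com\n")
	tmp.Close()
	defer os.Remove(tmp.Name())

	fmt.Println(buildReleaseInfo("", "", tmp.Name())["OLARES_NAME"])     // carried over
	fmt.Println(buildReleaseInfo("bob", "olares.com", tmp.Name())["OLARES_NAME"]) // explicit wins
}
```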

View File

@@ -195,11 +195,13 @@ func (g *GenerateK3sService) Execute(runtime connector.Runtime) error {
defaultKubeletArs := map[string]string{
"kube-reserved": "cpu=200m,memory=250Mi,ephemeral-storage=1Gi",
"system-reserved": "cpu=200m,memory=250Mi,ephemeral-storage=1Gi",
-"eviction-hard": "memory.available<5%,nodefs.available<10%",
+"eviction-hard": "memory.available<5%,nodefs.available<10%,imagefs.available<10%",
"config": "/etc/rancher/k3s/kubelet.config",
"containerd": container.DefaultContainerdCRISocket,
"cgroup-driver": "systemd",
"runtime-request-timeout": "5m",
+"image-gc-high-threshold": "91",
+"image-gc-low-threshold": "90",
}
defaultKubeProxyArgs := map[string]string{
"proxy-mode": "ipvs",
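
The two hunks above (commit #1478) pin an explicit image GC policy: the kubelet starts garbage-collecting unused images once image-filesystem usage exceeds the high threshold (91%) and reclaims until usage drops to the low threshold (90%), with an `imagefs.available<10%` eviction signal added alongside. A toy Go model of that threshold behavior (illustrative only, not kubelet code):

```go
package main

import "fmt"

// imageGCTarget models the thresholds set in the diff: GC triggers once
// disk usage exceeds highPercent and reclaims down to lowPercent; at or
// below highPercent nothing happens.
func imageGCTarget(usedPercent, highPercent, lowPercent int) (trigger bool, reclaimTo int) {
	if usedPercent <= highPercent {
		return false, usedPercent
	}
	return true, lowPercent
}

func main() {
	// Defaults set by the diff: high=91, low=90.
	trigger, target := imageGCTarget(95, 91, 90)
	fmt.Println(trigger, target) // above high: GC fires, aims for 90% usage
	trigger, _ = imageGCTarget(85, 91, 90)
	fmt.Println(trigger) // below high: no GC
}
```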

View File

@@ -307,6 +307,8 @@ func GetKubeletConfiguration(runtime connector.Runtime, kubeConf *common.KubeCon
"evictionPressureTransitionPeriod": "30s",
"featureGates": FeatureGatesDefaultConfiguration,
"runtimeRequestTimeout": "5m",
+"imageGCHighThresholdPercent": 91,
+"imageGCLowThresholdPercent": 90,
}
if securityEnhancement {

File diff suppressed because one or more lines are too long

View File

@@ -32,7 +32,7 @@ spec:
- command:
- ks-apiserver
- --logtostderr=true
-image: beclab/ks-apiserver:0.0.20
+image: beclab/ks-apiserver:0.0.21
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: ks-apiserver
ports:

View File

@@ -35,7 +35,7 @@ spec:
- controller-manager
- --logtostderr=true
- --leader-elect=false
-image: beclab/ks-controller-manager:0.0.20
+image: beclab/ks-controller-manager:0.0.21
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: ks-controller-manager
ports:

View File

@@ -748,12 +748,12 @@ spec:
sum (node_cpu_seconds_total{job="node-exporter", mode=~"user|nice|system|iowait|irq|softirq"}) by (cpu, instance, job, namespace, pod)
record: node_cpu_used_seconds_total
- expr: |
-max(kube_pod_info{job="kube-state-metrics"} * on(node) group_left(role) kube_node_role{job="kube-state-metrics", role="master"} or on(pod, namespace) kube_pod_info{job="kube-state-metrics"}) by (node, namespace, host_ip, role, pod)
+max(kube_pod_info{job="kube-state-metrics"} * on(node) group_left(role) kube_node_role{job="kube-state-metrics", role="master"} or on(pod, namespace) kube_pod_info{job="kube-state-metrics"}) by (node, namespace, role, pod)
record: 'node_namespace_pod:kube_pod_info:'
- expr: |
-count by (node, host_ip, role) (sum by (node, cpu, host_ip, role) (
+count by (node, role) (sum by (node, cpu, role) (
node_cpu_seconds_total{job="node-exporter"}
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
))
record: node:node_num_cpu:sum
@@ -761,27 +761,27 @@ spec:
avg(irate(node_cpu_used_seconds_total{job="node-exporter"}[5m]))
record: :node_cpu_utilisation:avg1m
- expr: |
-avg by (node, host_ip, role) (
+avg by (node, role) (
irate(node_cpu_used_seconds_total{job="node-exporter"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_cpu_utilisation:avg1m
- expr: |
-avg by (node, host_ip, role) (
+avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"user"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_user_cpu_utilisation:avg1m
- expr: |
-avg by (node, host_ip, role) (
+avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"system"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_system_cpu_utilisation:avg1m
- expr: |
-avg by (node, host_ip, role) (
+avg by (node, role) (
irate(node_cpu_seconds_total{job="node-exporter",mode=~"iowait"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:)
record: node:node_iowait_cpu_utilisation:avg1m
- expr: |
@@ -806,9 +806,9 @@ spec:
label_replace(node_memory_Cached_bytes, "node", "$1", "instance", "(.*)")
record: node:node_memory_Cached_bytes
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
(node_memory_Slab_bytes{job="node-exporter"} + node_memory_KernelStack_bytes{job="node-exporter"} + node_memory_PageTables_bytes{job="node-exporter"}+ node_memory_HardwareCorrupted_bytes{job="node-exporter"}+node_memory_Bounce_bytes{job="node-exporter"}-node_memory_SReclaimable_bytes{job="node-exporter"})
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_system_reserved
@@ -825,16 +825,16 @@ spec:
sum(node_memory_MemTotal_bytes{job="node-exporter"})
record: ':node_memory_utilisation:'
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
(node_memory_MemFree_bytes{job="node-exporter"} + node_memory_Cached_bytes{job="node-exporter"} + node_memory_Buffers_bytes{job="node-exporter"} + node_memory_SReclaimable_bytes{job="node-exporter"})
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_bytes_available:sum
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
node_memory_MemTotal_bytes{job="node-exporter"}
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_bytes_total:sum
@@ -842,30 +842,30 @@ spec:
1 - (node:node_memory_bytes_available:sum / node:node_memory_bytes_total:sum)
record: 'node:node_memory_utilisation:'
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_disk_reads_completed_total{job="node-exporter"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_iops_reads:sum
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_disk_writes_completed_total{job="node-exporter"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_iops_writes:sum
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_disk_read_bytes_total{job="node-exporter"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_throughput_bytes_read:sum
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_disk_written_bytes_total{job="node-exporter"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:data_volume_throughput_bytes_written:sum
@@ -874,74 +874,74 @@ spec:
sum(irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m]))
record: :node_net_utilisation:sum_irate
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
(irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[5m]) +
irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m]))
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_utilisation:sum_irate
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_bytes_transmitted:sum_irate
- expr: |
-sum by (node, host_ip, role) (
+sum by (node, role) (
irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[5m])
-* on (namespace, pod) group_left(node, host_ip, role)
+* on (namespace, pod) group_left(node, role)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_bytes_received:sum_irate
- expr: |
-sum by(node, host_ip, role) (sum(max(node_filesystem_files{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
+sum by(node, role) (sum(max(node_filesystem_files{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: 'node:node_inodes_total:'
- expr: |
-sum by(node, host_ip, role) (sum(max(node_filesystem_files_free{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
+sum by(node, role) (sum(max(node_filesystem_files_free{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"}) by (device, pod, namespace)) by (pod, namespace) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: 'node:node_inodes_free:'
- expr: |
-sum by (node, host_ip, role) (node_load1{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
+sum by (node, role) (node_load1{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load1:ratio
- expr: |
-sum by (node, host_ip, role) (node_load5{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
+sum by (node, role) (node_load5{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load5:ratio
- expr: |
-sum by (node, host_ip, role) (node_load15{job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
+sum by (node, role) (node_load15{job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) / node:node_num_cpu:sum
record: node:load15:ratio
- expr: |
-sum by (node, host_ip, role) ((kube_pod_status_scheduled{job="kube-state-metrics", condition="true"} > 0) * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:)
+sum by (node, role) ((kube_pod_status_scheduled{job="kube-state-metrics", condition="true"} > 0) * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:)
record: node:pod_count:sum
- expr: |
-(sum(kube_node_status_capacity{resource="pods", job="kube-state-metrics"}) by (node) * on(node) group_left(host_ip, role) max by(node, host_ip, role) (node_namespace_pod:kube_pod_info:{node!="",host_ip!=""}))
+(sum(kube_node_status_capacity{resource="pods", job="kube-state-metrics"}) by (node) * on(node) group_left(role) max by(node, role) (node_namespace_pod:kube_pod_info:{node!=""}))
record: node:pod_capacity:sum
- expr: |
node:pod_running:count / node:pod_capacity:sum
record: node:pod_utilization:ratio
- expr: |
-count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Succeeded"} > 0)) by (node, host_ip, role)
+count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Succeeded"} > 0)) by (node, role)
record: node:pod_running:count
- expr: |
-count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Running"} > 0)) by (node, host_ip, role)
+count(node_namespace_pod:kube_pod_info: unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase=~"Failed|Pending|Unknown|Running"} > 0)) by (node, role)
record: node:pod_succeeded:count
- expr: |
-count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) unless on (pod, namespace) ((kube_pod_status_ready{job="kube-state-metrics", condition="true"}>0) and on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Running"}>0)) unless on (pod, namespace) kube_pod_container_status_waiting_reason{job="kube-state-metrics", reason="ContainerCreating"}>0) by (node, host_ip, role)
+count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) unless on (pod, namespace) ((kube_pod_status_ready{job="kube-state-metrics", condition="true"}>0) and on (pod, namespace) (kube_pod_status_phase{job="kube-state-metrics", phase="Running"}>0)) unless on (pod, namespace) kube_pod_container_status_waiting_reason{job="kube-state-metrics", reason="ContainerCreating"}>0) by (node, role)
record: node:pod_abnormal:count
- expr: |
(count by(namespace, cluster) (kube_pod_info{job="kube-state-metrics"} unless on(pod, namespace, cluster) (kube_pod_status_phase{job="kube-state-metrics",phase="Succeeded"} > 0) unless on(pod, namespace, cluster) ((kube_pod_status_ready{condition="true",job="kube-state-metrics"} > 0) and on(pod, namespace, cluster) (kube_pod_status_phase{job="kube-state-metrics",phase="Running"} > 0)) unless on(pod, namespace, cluster) kube_pod_container_status_waiting_reason{job="kube-state-metrics",reason="ContainerCreating"} > 0) or on(namespace, cluster) (group by(namespace, cluster) (kube_pod_info{job="kube-state-metrics"}) * 0)) * on(namespace, cluster) group_left(user) (kube_namespace_labels{job="kube-state-metrics"}) > 0
record: user:pod_abnormal:count
- expr: |
node:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, host_ip, role)
node:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, role)
record: node:pod_abnormal:ratio
- expr: |
user:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!="",host_ip!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, host_ip, role)
user:pod_abnormal:count / count(node_namespace_pod:kube_pod_info:{node!=""} unless on (pod, namespace) kube_pod_status_phase{job="kube-state-metrics", phase="Succeeded"}>0) by (node, role)
record: user:pod_abnormal:ratio
- expr: |
sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role)
sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role)
record: 'node:disk_space_available:'
- expr: |
1 - sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role) / sum(max(node_filesystem_size_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, host_ip, role) node_namespace_pod:kube_pod_info:) by (device, node, host_ip, role)) by (node, host_ip, role)
1 - sum(max(node_filesystem_avail_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role) / sum(max(node_filesystem_size_bytes{device=~"/dev/.*", device!~"/dev/loop\\d+", job="node-exporter"} * on (namespace, pod) group_left(node, role) node_namespace_pod:kube_pod_info:) by (device, node, role)) by (node, role)
record: node:disk_space_utilization:ratio
- expr: |
(1 - (node:node_inodes_free: / node:node_inodes_total:))
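The recorded series defined above (e.g. `node:pod_utilization:ratio`) can be read back through the standard Prometheus HTTP API. A minimal Go sketch of building such an instant-query URL — the `localhost:9090` endpoint and the `buildQueryURL` helper are assumptions for illustration, not part of this change:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildQueryURL builds an instant-query URL for the Prometheus HTTP API
// (/api/v1/query). The colons in recorded-rule names must be URL-escaped,
// which url.Values.Encode handles.
func buildQueryURL(base, promql string) string {
	v := url.Values{}
	v.Set("query", promql)
	return base + "/api/v1/query?" + v.Encode()
}

func main() {
	u := buildQueryURL("http://localhost:9090", "node:pod_utilization:ratio")
	fmt.Println(u) // http://localhost:9090/api/v1/query?query=node%3Apod_utilization%3Aratio
}
```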

View File

@@ -42,7 +42,7 @@ spec:
- --collector.netdev.address-info
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
image: beclab/node-exporter:0.0.2
image: beclab/node-exporter:0.0.3
name: node-exporter
securityContext:
privileged: true

View File

@@ -5993,6 +5993,8 @@ spec:
# Enable or Disable VXLAN on the default IPv6 IP pool.
- name: CALICO_IPV6POOL_VXLAN
value: "Never"
- name: FELIX_HEALTHHOST
value: 127.0.0.1
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:

View File

@@ -117,7 +117,7 @@ func (m *Manager) packageLauncher() error {
func (m *Manager) packageGPU() error {
fmt.Println("packaging gpu ...")
return util.CopyDirectory(
filepath.Join(m.olaresRepoRoot, "framework/gpu/.olares/config/gpu"),
filepath.Join(m.olaresRepoRoot, "infrastructure/gpu/.olares/config/gpu"),
filepath.Join(m.distPath, "wizard/config/gpu"),
)
}

View File

@@ -86,6 +86,12 @@ func (t *InstallOsSystem) Execute(runtime connector.Runtime) error {
// TODO: wait for the platform to be ready
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceOsFramework)
if err != nil {
return err
}
ctx, cancel = context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
var frameworkPath = path.Join(runtime.GetInstallerDir(), "wizard", "config", "os-framework")
if err := utils.UpgradeCharts(ctx, actionConfig, settings, common.ChartNameOSFramework, frameworkPath, "", common.NamespaceOsFramework, vals, false); err != nil {
return err

View File

@@ -453,14 +453,21 @@ func (a *DeletePodsUsingHostIP) Execute(runtime connector.Runtime) error {
if err != nil {
return errors.Wrap(err, "failed to get pods using host IP")
}
a.PipelineCache.Set(common.CacheCountPodsUsingHostIP, len(targetPods))
var waitRecreationPodsCount int
for _, pod := range targetPods {
logger.Infof("restarting pod %s/%s that's using host IP", pod.Namespace, pod.Name)
err = kubeClient.CoreV1().Pods(pod.Namespace).Delete(context.Background(), pod.Name, metav1.DeleteOptions{})
if err != nil && !kerrors.IsNotFound(err) {
return errors.Wrap(err, "failed to delete pod")
}
// pods not created by any owner resource
// may not be recreated immediately and should not be waited for
if len(pod.OwnerReferences) > 0 {
waitRecreationPodsCount++
}
}
a.PipelineCache.Set(common.CacheCountPodsWaitForRecreation, waitRecreationPodsCount)
// try our best to wait for the pods to be actually deleted
// so the next module doesn't see them still in a Running phase
@@ -479,7 +486,7 @@ type WaitForPodsUsingHostIPRecreate struct {
}
func (a *WaitForPodsUsingHostIPRecreate) Execute(runtime connector.Runtime) error {
count, ok := a.PipelineCache.GetMustInt(common.CacheCountPodsUsingHostIP)
count, ok := a.PipelineCache.GetMustInt(common.CacheCountPodsWaitForRecreation)
if !ok {
return errors.New("failed to get the count of pods using host IP")
}

View File

@@ -197,7 +197,7 @@ func (u *UpgradeSystemComponents) Execute(runtime connector.Runtime) error {
return err
}
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceOsPlatform)
actionConfig, settings, err = utils.InitConfig(config, common.NamespaceOsFramework)
if err != nil {
return err
}

View File

@@ -1,4 +1,6 @@
current_dir := $(dir $(abspath $(firstword $(MAKEFILE_LIST))))
.PHONY: all tidy fmt vet build
all: tidy build
@@ -17,3 +19,11 @@ build: fmt vet ;$(info $(M)...Begin to build terminusd.) @
build-linux: fmt vet ;$(info $(M)...Begin to build terminusd (linux version).) @
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -o bin/olaresd cmd/terminusd/main.go
build-linux-in-docker:
docker run -it --platform linux/amd64 --rm \
-v $(current_dir):/olaresd \
-w /olaresd \
-e DEBIAN_FRONTEND=noninteractive \
golang:1.24 \
sh -c "apt-get -y update; apt-get -y install libudev-dev; make build-linux"

View File

@@ -14,6 +14,7 @@ import (
"github.com/beclab/Olares/daemon/internel/ble"
"github.com/beclab/Olares/daemon/internel/mdns"
"github.com/beclab/Olares/daemon/internel/watcher"
"github.com/beclab/Olares/daemon/internel/watcher/cert"
"github.com/beclab/Olares/daemon/internel/watcher/system"
"github.com/beclab/Olares/daemon/internel/watcher/upgrade"
"github.com/beclab/Olares/daemon/internel/watcher/usb"
@@ -96,6 +97,7 @@ func main() {
// usb.NewUsbWatcher(),
usb.NewUmountWatcher(),
upgrade.NewUpgradeWatcher(),
cert.NewCertWatcher(),
}, func() {
if s != nil {
if err := s.Restart(); err != nil {

View File

@@ -79,7 +79,6 @@ require (
github.com/containerd/platforms v0.2.1 // indirect
github.com/containerd/ttrpc v1.2.7 // indirect
github.com/containerd/typeurl/v2 v2.1.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
@@ -123,7 +122,7 @@ require (
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/sys/mountinfo v0.7.1 // indirect
github.com/moby/sys/mountinfo v0.7.2 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/user v0.3.0 // indirect
@@ -134,9 +133,9 @@ require (
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/opencontainers/runc v1.1.13 // indirect
github.com/opencontainers/runtime-spec v1.1.0 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/opencontainers/runc v1.3.0 // indirect
github.com/opencontainers/runtime-spec v1.2.1 // indirect
github.com/opencontainers/selinux v1.11.1 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.1 // indirect

View File

@@ -63,8 +63,6 @@ github.com/containerd/ttrpc v1.2.7 h1:qIrroQvuOL9HQ1X6KHe2ohc7p+HP/0VE6XPU7elJRq
github.com/containerd/ttrpc v1.2.7/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=
github.com/containerd/typeurl/v2 v2.1.1 h1:3Q4Pt7i8nYwy2KmQWIw2+1hTvwTE/6w9FqcttATPO/4=
github.com/containerd/typeurl/v2 v2.1.1/go.mod h1:IDp2JFvbwZ31H8dQbEIY7sDl2L3o3HZj1hsSQlywkQ0=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
@@ -120,7 +118,6 @@ github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofiber/fiber/v2 v2.52.5 h1:tWoP1MJQjGEe4GB5TUGOi7P2E0ZMMRx5ZTG4rT+yGMo=
@@ -233,8 +230,8 @@ github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/sys/mountinfo v0.7.1 h1:/tTvQaSJRr2FshkhXiIpux6fQ2Zvc4j7tAhMTStAG2g=
github.com/moby/sys/mountinfo v0.7.1/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
github.com/moby/sys/mountinfo v0.7.2 h1:1shs6aH5s4o5H2zQLn796ADW1wMrIwHsyJ2v9KouLrg=
github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/sys/signal v0.7.0 h1:25RW3d5TnQEoKvRbEKUGay6DCQ46IxAVTT9CUMgmsSI=
@@ -273,12 +270,12 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/opencontainers/runc v1.1.13 h1:98S2srgG9vw0zWcDpFMn5TRrh8kLxa/5OFUstuUhmRs=
github.com/opencontainers/runc v1.1.13/go.mod h1:R016aXacfp/gwQBYw2FDGa9m+n6atbLWrYY8hNMT/sA=
github.com/opencontainers/runtime-spec v1.1.0 h1:HHUyrt9mwHUjtasSbXSMvs4cyFxh+Bll4AjJ9odEGpg=
github.com/opencontainers/runtime-spec v1.1.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/opencontainers/runc v1.3.0 h1:cvP7xbEvD0QQAs0nZKLzkVog2OPZhI/V2w3WmTmUSXI=
github.com/opencontainers/runc v1.3.0/go.mod h1:9wbWt42gV+KRxKRVVugNP6D5+PQciRbenB4fLVsqGPs=
github.com/opencontainers/runtime-spec v1.2.1 h1:S4k4ryNgEpxW1dzyqffOmhI1BHYcjzU8lpJfSlR0xww=
github.com/opencontainers/runtime-spec v1.2.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.11.1 h1:nHFvthhM0qY8/m+vfhJylliSshm8G1jJ2jDMcgULaH8=
github.com/opencontainers/selinux v1.11.1/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
@@ -440,7 +437,6 @@ golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210426080607-c94f62235c83/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=

View File

@@ -80,22 +80,32 @@ func (h *handlers) GetNetIfs(ctx *fiber.Ctx) error {
}
}
if test == "true" {
r.InternetConnected = ptr.To(utils.CheckInterfaceIPv4Connectivity(ctx.Context(), i.Iface.Name))
devices, err := utils.GetAllDevice(ctx.Context())
if err != nil {
klog.Error("get all devices error, ", err)
return h.ErrJSON(ctx, http.StatusServiceUnavailable, err.Error())
}
devices, err := utils.GetAllDevice(ctx.Context())
if err != nil {
klog.Error("get all devices error, ", err)
return h.ErrJSON(ctx, http.StatusServiceUnavailable, err.Error())
}
if d, ok := devices[r.Iface]; ok {
r.Ipv4Gateway = &d.Ipv4Gateway
r.Ipv6Gateway = &d.Ipv6Gateway
r.Ipv4DNS = &d.Ipv4DNS
r.Ipv6DNS = &d.Ipv6DNS
r.Ipv6Address = &d.Ipv6Address
r.Ipv4Mask = &d.Ipv4Mask
r.Method = &d.Method
if d, ok := devices[r.Iface]; ok {
r.Ipv4Gateway = &d.Ipv4Gateway
r.Ipv6Gateway = &d.Ipv6Gateway
r.Ipv4DNS = &d.Ipv4DNS
r.Ipv6DNS = &d.Ipv6DNS
r.Ipv6Address = &d.Ipv6Address
r.Ipv4Mask = &d.Ipv4Mask
r.Method = &d.Method
}
if rx, tx, err := utils.GetInterfaceTraffic(r.Iface); err == nil {
r.RxRate = ptr.To(rx)
r.TxRate = ptr.To(tx)
} else {
klog.Error("get interface rx/tx rate error, ", err)
}
if test == "true" {
if r.IP != "" {
r.InternetConnected = ptr.To(utils.CheckInterfaceIPv4Connectivity(ctx.Context(), i.Iface.Name))
}
if r.Ipv6Address != nil && *r.Ipv6Address != "" {
@@ -104,12 +114,6 @@ func (h *handlers) GetNetIfs(ctx *fiber.Ctx) error {
r.Ipv6Connectivity = &connected
}
if rx, tx, err := utils.GetInterfaceTraffic(r.Iface); err == nil {
r.RxRate = ptr.To(rx)
r.TxRate = ptr.To(tx)
} else {
klog.Error("get interface rx/tx rate error, ", err)
}
}
res = append(res, r)

View File

@@ -8,12 +8,14 @@ import (
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/commands"
"github.com/beclab/Olares/daemon/pkg/commands/upgrade"
"github.com/gofiber/fiber/v2"
"k8s.io/klog/v2"
)
type UpgradeReq struct {
Version string `json:"version"`
Version string `json:"version"`
DownloadOnly bool `json:"downloadOnly,omitempty"` // false means download-and-upgrade
}
func (r *UpgradeReq) Check() error {
@@ -43,10 +45,18 @@ func (h *handlers) RequestOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface)
return h.ErrJSON(ctx, http.StatusBadRequest, err.Error())
}
if _, err := cmd.Execute(ctx.Context(), req.Version); err != nil {
upgradeReq := upgrade.UpgradeRequest{
Version: req.Version,
DownloadOnly: req.DownloadOnly,
}
if _, err := cmd.Execute(ctx.Context(), upgradeReq); err != nil {
return h.ErrJSON(ctx, http.StatusBadRequest, err.Error())
}
if req.DownloadOnly {
return h.OkJSON(ctx, "successfully created download target")
}
return h.OkJSON(ctx, "successfully created upgrade target")
}
@@ -55,5 +65,5 @@ func (h *handlers) CancelOlaresUpgrade(ctx *fiber.Ctx, cmd commands.Interface) e
return h.ErrJSON(ctx, http.StatusBadRequest, err.Error())
}
return h.OkJSON(ctx, "successfully removed upgrade target")
return h.OkJSON(ctx, "successfully cancelled upgrade/download")
}

View File

@@ -50,10 +50,10 @@ func (s *server) Start() error {
cmd.Post("/upgrade", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(
s.handlers.RunCommand(s.handlers.RequestOlaresUpgrade, upgrade.NewCreateTarget))))
s.handlers.RunCommand(s.handlers.RequestOlaresUpgrade, upgrade.NewCreateUpgradeTarget))))
cmd.Delete("/upgrade", s.handlers.RequireSignature(
s.handlers.RunCommand(s.handlers.CancelOlaresUpgrade, upgrade.NewRemoveTarget)))
s.handlers.RunCommand(s.handlers.CancelOlaresUpgrade, upgrade.NewRemoveUpgradeTarget)))
cmd.Post("/reboot", s.handlers.RequireSignature(
s.handlers.WaitServerRunning(

View File

@@ -43,6 +43,7 @@ func (s *server) Close() {
if s.server != nil {
klog.Info("mDNS server shutdown ")
s.server.Shutdown()
s.registeredIP = "" // clear the registered IP
}
}
@@ -88,11 +89,11 @@ func (s *server) Restart() error {
}
if s.registeredIP != ip {
s.registeredIP = ip
if s.server != nil {
s.Close()
}
s.registeredIP = ip
instanceName := s.name
if instanceName == "" {
instanceName = hostname

View File

@@ -0,0 +1,141 @@
package cert
import (
"context"
"fmt"
"time"
"github.com/beclab/Olares/daemon/internel/watcher"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/utils"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
kubeErr "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/klog/v2"
"k8s.io/utils/ptr"
)
var _ watcher.Watcher = &userCertWatcher{}
type userCertWatcher struct {
}
func NewCertWatcher() *userCertWatcher {
return &userCertWatcher{}
}
// Watch implements watcher.Watcher.
func (u *userCertWatcher) Watch(ctx context.Context) {
if state.CurrentState.TerminusState != state.TerminusRunning {
return
}
kubeClient, err := utils.GetKubeClient()
if err != nil {
klog.Error("failed to get kube client, ", err)
return
}
dynamicClient, err := utils.GetDynamicClient()
if err != nil {
klog.Error("failed to get dynamic client, ", err)
return
}
users, err := utils.ListUsers(ctx, dynamicClient)
if err != nil {
klog.Error("failed to list users, ", err)
return
}
for _, user := range users {
namespace := fmt.Sprintf("user-space-%s", user.GetName())
config, err := kubeClient.CoreV1().ConfigMaps(namespace).Get(ctx, "zone-ssl-config", metav1.GetOptions{})
if err != nil {
klog.Error("failed to get user config map, ", err, ", namespace: ", namespace)
continue
}
if expired, ok := config.Data["expired_at"]; ok {
expiredTime, err := time.Parse("2006-01-02T15:04:05Z", expired)
if err != nil {
klog.Error("failed to parse expired_at, ", err)
continue
}
// Check if the certificate will expire within 10 days
if expiredTime.Before(time.Now().Add(10 * 24 * time.Hour)) {
klog.Info("user cert expiring within 10 days, ", user.GetName())
err = createOrUpdateJob(ctx, kubeClient, namespace)
if err != nil {
klog.Error("failed to create or update job for user cert, ", err, ", namespace: ", namespace)
} else {
klog.Info("job created for user cert download, ", user.GetName(), ", namespace: ", namespace)
}
}
}
}
}
func createOrUpdateJob(ctx context.Context, kubeClient kubernetes.Interface, namespace string) error {
currentJob, err := kubeClient.BatchV1().Jobs(namespace).Get(ctx, jobDownloadUserCert.Name, metav1.GetOptions{})
if err != nil {
if kubeErr.IsNotFound(err) {
// Create the job if it does not exist
} else {
return fmt.Errorf("failed to get job: %w", err)
}
} else {
// check the existing job: only recreate it after a previous clean success
if currentJob.Status.Succeeded == 0 || currentJob.Status.Failed > 0 {
klog.Info("existing job has not succeeded cleanly yet, skip creating a new one")
return nil
}
// If the job exists and has completed, delete it before creating a new one
klog.Info("delete existing job: ", currentJob.Name)
err = kubeClient.BatchV1().Jobs(namespace).Delete(ctx, currentJob.Name, metav1.DeleteOptions{})
if err != nil {
return fmt.Errorf("failed to delete job: %w", err)
}
}
job := jobDownloadUserCert.DeepCopy()
job.Namespace = namespace
_, err = kubeClient.BatchV1().Jobs(job.Namespace).Create(ctx, job, metav1.CreateOptions{})
if err != nil {
return fmt.Errorf("failed to create job: %w", err)
}
klog.Info("Job created: ", job.Name)
return nil
}
var jobDownloadUserCert = batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "download-user-cert",
},
Spec: batchv1.JobSpec{
BackoffLimit: ptr.To[int32](5),
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyOnFailure,
Containers: []corev1.Container{
{
Name: "download-user-cert",
Image: "busybox:1.28",
Command: []string{"wget",
"--header",
"X-FROM-CRONJOB: true",
"-qSO", "-",
// the job is created in each user's namespace (see createOrUpdateJob),
// so the in-namespace bfl service is addressed directly rather than
// a hardcoded user namespace
"http://bfl/bfl/backend/v1/re-download-cert",
},
},
},
},
},
},
}

View File

@@ -3,10 +3,13 @@ package upgrade
import (
"context"
"fmt"
"github.com/Masterminds/semver/v3"
"github.com/beclab/Olares/daemon/pkg/utils"
"math"
"os"
"path/filepath"
"sync"
"time"
"github.com/beclab/Olares/daemon/internel/watcher"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
@@ -19,6 +22,9 @@ type upgradeWatcher struct {
watcher.Watcher
sync.Mutex
upgrading bool
// Internal retry state
retryCount int
nextRetryTime *time.Time
}
func NewUpgradeWatcher() watcher.Watcher {
@@ -27,19 +33,72 @@ func NewUpgradeWatcher() watcher.Watcher {
}
func (w *upgradeWatcher) Watch(ctx context.Context) {
switch state.CurrentState.TerminusState {
// indicates an upgrade target exists
case state.Upgrading:
// if the upgrade process is running, just wait for it to finish
if !w.isUpgrading() {
go func() {
w.startUpgrading()
defer w.stopUpgrading()
if err := doUpgrade(ctx); err != nil {
klog.Errorf("upgrading error: %v", err)
}
}()
targetVersion, err := state.GetOlaresUpgradeTarget()
if err != nil {
klog.Errorf("failed to check upgrade target: %v", err)
return
}
if targetVersion == nil {
w.resetRetryState()
state.TerminusStateMu.Lock()
state.CurrentState.UpgradingState = ""
state.CurrentState.UpgradingTarget = ""
state.CurrentState.UpgradingRetryNum = 0
state.CurrentState.UpgradingNextRetryAt = nil
state.CurrentState.UpgradingStep = ""
state.CurrentState.UpgradingProgressNum = 0
state.CurrentState.UpgradingProgress = ""
state.CurrentState.UpgradingError = ""
state.CurrentState.UpgradingDownloadState = ""
state.CurrentState.UpgradingDownloadStep = ""
state.CurrentState.UpgradingDownloadProgressNum = 0
state.CurrentState.UpgradingDownloadProgress = ""
state.CurrentState.UpgradingDownloadError = ""
state.TerminusStateMu.Unlock()
return
}
dynamicClient, err := utils.GetDynamicClient()
if err != nil {
return
}
currentVersionStr, err := utils.GetTerminusVersion(ctx, dynamicClient)
if err != nil {
klog.Error("failed to get current version, skip upgrading check: ", err)
return
}
if currentVersionStr == nil {
klog.Error("current version is nil, skip upgrading check")
return
}
currentVersion, err := semver.NewVersion(*currentVersionStr)
if err != nil || currentVersion.LessThan(targetVersion) {
state.CurrentState.UpgradingTarget = targetVersion.Original()
} else {
err = upgrade.RemoveUpgradeFiles()
if err != nil {
klog.Error("failed to remove upgrade files: ", err)
}
return
}
if !w.isUpgrading() {
if !w.isTimeToRetry() {
return
}
go func() {
w.startUpgrading()
defer w.stopUpgrading()
if err := w.doUpgradeWithRetry(ctx); err != nil {
klog.Errorf("upgrading error: %v", err)
}
}()
}
}
@@ -61,52 +120,195 @@ func (w *upgradeWatcher) stopUpgrading() {
w.upgrading = false
}
func (w *upgradeWatcher) isTimeToRetry() bool {
w.Lock()
defer w.Unlock()
if w.nextRetryTime == nil {
return true
}
now := time.Now()
if now.Before(*w.nextRetryTime) {
klog.V(2).Infof("upgrade retry scheduled for %v (in %v)",
*w.nextRetryTime,
w.nextRetryTime.Sub(now))
return false
}
return true
}
func (w *upgradeWatcher) resetRetryState() {
w.Lock()
defer w.Unlock()
w.retryCount = 0
w.nextRetryTime = nil
}
func (w *upgradeWatcher) incrementRetry() {
w.Lock()
defer w.Unlock()
w.retryCount++
nextRetry := state.CalculateNextRetryTime(w.retryCount)
w.nextRetryTime = &nextRetry
}
func (w *upgradeWatcher) getRetryCount() int {
w.Lock()
defer w.Unlock()
return w.retryCount
}
func (w *upgradeWatcher) doUpgradeWithRetry(ctx context.Context) error {
err := doUpgrade(ctx)
if err != nil {
w.incrementRetry()
state.CurrentState.UpgradingRetryNum = w.getRetryCount()
state.CurrentState.UpgradingNextRetryAt = w.nextRetryTime
klog.Errorf("upgrade attempt %d failed: %v. Next retry scheduled for %v",
w.getRetryCount(), err, *w.nextRetryTime)
targetVersionDir := filepath.Join(commands.TERMINUS_BASE_DIR, "versions", "v"+state.CurrentState.UpgradingTarget)
prepareLogFile := filepath.Join(targetVersionDir, "install.log")
upgradeLogFile := filepath.Join(targetVersionDir, "upgrade.log")
for _, logFile := range []string{prepareLogFile, upgradeLogFile} {
if err := os.Remove(logFile); err != nil && !os.IsNotExist(err) {
klog.Errorf("failed to clear log file %s: %v", logFile, err)
}
}
}
return err
}
type upgradePhase struct {
newCMD func() commands.Interface
progressOffset int
progressSpan int
}
// todo: add a phase to upgrade olares-cli after the versions of olares-cli and olares have been unified
var downloadPhases = []upgradePhase{
{upgrade.NewDownloadCLI, 0, 10},
{upgrade.NewDownloadWizard, 10, 20},
{upgrade.NewDownloadComponent, 30, 40},
}
var upgradePhases = []upgradePhase{
{upgrade.NewUpgradeCli, 0, 5},
{upgrade.NewDownloadWizard, 5, 10},
{upgrade.NewVersionCompatibilityCheck, 15, 0},
{upgrade.NewHealthCheck, 15, 0},
{upgrade.NewDownloadComponent, 15, 30},
{upgrade.NewPrepareImages, 45, 30},
{upgrade.NewPrepareOlaresd, 75, 5},
{upgrade.NewUpgrade, 80, 15},
{upgrade.NewVersionCompatibilityCheck, 0, 5},
{upgrade.NewHealthCheck, 5, 5},
{upgrade.NewInstallCLI, 10, 10},
{upgrade.NewImportImages, 20, 30},
{upgrade.NewInstallOlaresd, 50, 10},
{upgrade.NewUpgrade, 60, 35},
{upgrade.NewRemoveTarget, 95, 5},
}
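Each phase above carries a `progressOffset` and a `progressSpan`, and the overall progress bar only reads sensibly if the phases tile 0–100% without gaps. A small sanity-check sketch over the new `upgradePhases` values — the `phase` type and `checkContiguous` helper are illustrative, not in the source:

```go
package main

import "fmt"

// phase mirrors the offset/span pair carried by upgradePhase above.
type phase struct {
	offset, span int
}

// checkContiguous verifies that each phase starts where the previous one
// ends and that the last phase reaches exactly 100%.
func checkContiguous(phases []phase) error {
	next := 0
	for i, p := range phases {
		if p.offset != next {
			return fmt.Errorf("phase %d starts at %d%%, expected %d%%", i, p.offset, next)
		}
		next = p.offset + p.span
	}
	if next != 100 {
		return fmt.Errorf("phases end at %d%%, expected 100%%", next)
	}
	return nil
}

func main() {
	// Offsets/spans copied from the new upgradePhases list above.
	upgrade := []phase{{0, 5}, {5, 5}, {10, 10}, {20, 30}, {50, 10}, {60, 35}, {95, 5}}
	fmt.Println(checkContiguous(upgrade)) // <nil>: the list tiles 0..100 with no gaps
}
```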
func doUpgrade(ctx context.Context) (err error) {
downloadCompleted, err := state.IsUpgradeDownloadCompleted()
if err != nil {
return fmt.Errorf("failed to check download status: %v", err)
}
if !downloadCompleted {
// Execute download phases
if err := doDownloadPhases(ctx); err != nil {
return err
}
} else {
klog.Info("download already completed, skipping download phases")
state.CurrentState.UpgradingDownloadState = state.Completed
state.CurrentState.UpgradingDownloadProgress = "100%"
state.CurrentState.UpgradingDownloadProgressNum = 100
}
downloadOnly, err := state.IsUpgradeDownloadOnly()
if err != nil {
return fmt.Errorf("failed to check download-only status: %v", err)
}
if downloadOnly {
state.CurrentState.UpgradingState = "WaitingForUserConfirm"
klog.Info("download completed, waiting for the user to remove the upgrade.downloadonly file to proceed with the upgrade")
return nil
}
return doUpgradePhases(ctx)
}
func doDownloadPhases(ctx context.Context) (err error) {
defer func() {
if err != nil {
state.CurrentState.UpgradingDownloadState = state.Failed
state.CurrentState.UpgradingDownloadError = err.Error()
klog.Errorf("download phases failed: %v", err)
} else {
state.CurrentState.UpgradingDownloadState = state.Completed
state.CurrentState.UpgradingDownloadError = ""
if err := createUpgradeDownloadedFile(); err != nil {
klog.Errorf("failed to create upgrade.downloaded file: %v", err)
}
klog.Info("download phases completed successfully")
}
}()
state.CurrentState.UpgradingDownloadState = state.InProgress
state.CurrentState.UpgradingDownloadError = ""
for _, phase := range downloadPhases {
phaseCMD := phase.newCMD()
state.CurrentState.UpgradingDownloadStep = string(phaseCMD.OperationName())
res, err := phaseCMD.Execute(ctx, state.CurrentState.UpgradingTarget)
if err != nil {
return fmt.Errorf("error: download phase %s: %v", phaseCMD.OperationName(), err)
}
executionRes, ok := res.(upgrade.ExecutionRes)
if !ok {
return fmt.Errorf("unexpected result type for download phase %s", phaseCMD.OperationName())
}
if executionRes.Finished() {
continue
}
var phaseProgress int
for phaseProgress < 100 {
select {
case <-ctx.Done():
return nil
case p, ok := <-executionRes.Progress():
if !ok {
if phaseProgress != commands.ProgressNumFinished {
return fmt.Errorf("error: download phase %s: command execution did not succeed", phaseCMD.OperationName())
}
} else if p > phaseProgress {
phaseProgress = p
klog.Infof("refreshing download phase %s, progress: %d", phaseCMD.OperationName(), phaseProgress)
}
}
refreshDownloadProgressFromPhase(phase, phaseProgress)
}
}
return nil
}
func doUpgradePhases(ctx context.Context) (err error) {
defer func() {
if err != nil {
state.CurrentState.UpgradingState = state.Failed
state.CurrentState.UpgradingError = err.Error()
// clear logs after every failed attempt
// in case any underlying change that bypassed olaresd, e.g., a manual removal of files,
// is causing the upgrade retry to get stuck forever
targetVersionDir := filepath.Join(commands.TERMINUS_BASE_DIR, "versions", "v"+state.CurrentState.UpgradingTarget)
prepareLogFile := filepath.Join(targetVersionDir, "install.log")
upgradeLogFile := filepath.Join(targetVersionDir, "upgrade.log")
for _, logFile := range []string{prepareLogFile, upgradeLogFile} {
if err := os.Remove(logFile); err != nil && !os.IsNotExist(err) {
klog.Errorf("failed to clear log file %s of current upgrade attempt (%d): %v", logFile, state.CurrentState.UpgradingRetryNum, err)
}
}
}
}()
state.CurrentState.UpgradingState = state.InProgress
state.CurrentState.UpgradingError = ""
state.CurrentState.UpgradingRetryNum += 1
state.StateTrigger <- struct{}{}
for _, phase := range upgradePhases {
phaseCMD := phase.newCMD()
state.CurrentState.UpgradingStep = string(phaseCMD.OperationName())
res, err := phaseCMD.Execute(ctx, state.CurrentState.UpgradingTarget)
if err != nil {
return fmt.Errorf("error: upgrade phase %s: %v", phaseCMD.OperationName(), err)
@@ -116,9 +318,6 @@ func doUpgrade(ctx context.Context) (err error) {
return fmt.Errorf("unexpected result type for upgrade phase %s", phaseCMD.OperationName())
}
if executionRes.Finished() {
// for now, do not update progress here
// as it may revert back the progress
// todo: if the retry num will be presented by the frontend to user, maybe we can update progress here
continue
}
var phaseProgress int
@@ -127,7 +326,6 @@ func doUpgrade(ctx context.Context) (err error) {
case <-ctx.Done():
return nil
case p, ok := <-executionRes.Progress():
// the command completed and the progress channel is closed
if !ok {
if phaseProgress != commands.ProgressNumFinished {
return fmt.Errorf("error: upgrade phase %s: command execution did not succeed", phaseCMD.OperationName())
@@ -140,8 +338,6 @@ func doUpgrade(ctx context.Context) (err error) {
refreshUpgradeProgressFromPhase(phase, phaseProgress)
}
}
// if the upgrade succeeded, the upgrade target will be removed
// and the upgrade status cleared
return nil
}
@@ -154,3 +350,17 @@ func refreshUpgradeProgressFromPhase(phase upgradePhase, phaseProgress int) {
state.CurrentState.UpgradingProgressNum = newProgress
state.CurrentState.UpgradingProgress = fmt.Sprintf("%d%%", state.CurrentState.UpgradingProgressNum)
}
func refreshDownloadProgressFromPhase(phase upgradePhase, phaseProgress int) {
spanProgress := math.Min(float64(phaseProgress)*float64(phase.progressSpan)/float64(commands.ProgressNumFinished), float64(phase.progressSpan))
newProgress := phase.progressOffset + int(math.Round(spanProgress))
if state.CurrentState.UpgradingDownloadProgressNum >= newProgress {
return
}
state.CurrentState.UpgradingDownloadProgressNum = newProgress
state.CurrentState.UpgradingDownloadProgress = fmt.Sprintf("%d%%", state.CurrentState.UpgradingDownloadProgressNum)
}
func createUpgradeDownloadedFile() error {
return os.WriteFile(commands.UPGRADE_DOWNLOADED_FILE, []byte(""), 0644)
}
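The offset/span mapping used by refreshDownloadProgressFromPhase above can be sketched standalone: each phase owns a fixed slice (span) of the overall 0-100 range starting at its offset, and phase-local progress is scaled into that slice and clamped. The field names mirror the diff, but the phase values below are made up for illustration and the state bookkeeping is omitted.

```go
package main

import (
	"fmt"
	"math"
)

const progressNumFinished = 100

// phase mirrors the progressOffset/progressSpan fields from the diff.
type phase struct {
	progressOffset int
	progressSpan   int
}

// globalProgress maps a phase-local progress (0-100) into the phase's
// slice of the overall progress bar, clamping at the span boundary.
func globalProgress(p phase, phaseProgress int) int {
	spanProgress := math.Min(
		float64(phaseProgress)*float64(p.progressSpan)/float64(progressNumFinished),
		float64(p.progressSpan),
	)
	return p.progressOffset + int(math.Round(spanProgress))
}

func main() {
	download := phase{progressOffset: 20, progressSpan: 30} // occupies 20%-50% overall
	fmt.Println(globalProgress(download, 0))   // 20
	fmt.Println(globalProgress(download, 50))  // 35
	fmt.Println(globalProgress(download, 100)) // 50
	fmt.Println(globalProgress(download, 150)) // still 50: clamped to the span
}
```

Because callers only ever move the stored progress forward (the `>=` guard in the diff), a phase that re-reports a lower number cannot make the bar go backwards.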


@@ -16,7 +16,6 @@ import (
"k8s.io/klog/v2"
"k8s.io/utils/pointer"
"github.com/Masterminds/semver/v3"
cpu "github.com/klauspost/cpuid/v2"
"github.com/pbnjay/memory"
)
@@ -59,12 +58,19 @@ type state struct {
UninstallingProgressNum int `json:"-"`
UpgradingTarget string `json:"upgradingTarget"`
UpgradingRetryNum int `json:"upgradingRetryNum"`
UpgradingNextRetryAt *time.Time `json:"upgradingNextRetryAt,omitempty"`
UpgradingState ProcessingState `json:"upgradingState"`
UpgradingStep string `json:"upgradingStep"`
UpgradingProgress string `json:"upgradingProgress"`
UpgradingProgressNum int `json:"-"`
UpgradingError string `json:"upgradingError"`
UpgradingDownloadState ProcessingState `json:"upgradingDownloadState"`
UpgradingDownloadStep string `json:"upgradingDownloadStep"`
UpgradingDownloadProgress string `json:"upgradingDownloadProgress"`
UpgradingDownloadProgressNum int `json:"-"`
UpgradingDownloadError string `json:"upgradingDownloadError"`
CollectingLogsState ProcessingState `json:"collectingLogsState"`
CollectingLogsError string `json:"collectingLogsError"`
@@ -97,6 +103,13 @@ func bToGb(b uint64) string {
func CheckCurrentStatus(ctx context.Context) error {
TerminusStateMu.Lock()
name, err := utils.GetOlaresNameFromReleaseFile()
if err != nil {
klog.Error("get olares name from release file error, ", err)
} else {
CurrentState.TerminusName = &name
}
var currentTerminusState TerminusState = CurrentState.TerminusState
defer func() {
CurrentState.TerminusState = currentTerminusState
@@ -151,6 +164,7 @@ func CheckCurrentStatus(ctx context.Context) error {
// get network info
ips, err := nets.GetInternalIpv4Addr()
if err != nil {
currentTerminusState = NetworkNotReady
return err
}
@@ -293,7 +307,6 @@ func CheckCurrentStatus(ctx context.Context) error {
currentTerminusState = NotInstalled
CurrentState.InstallingProgress = ""
CurrentState.InstallingState = ""
CurrentState.TerminusName = nil
CurrentState.InstalledTime = nil
CurrentState.InitializedTime = nil
@@ -332,33 +345,24 @@ func CheckCurrentStatus(ctx context.Context) error {
}
}
targetVersion, err := GetOlaresUpgradeTarget()
// only set system state to Upgrading if actual upgrade should be in progress
// (not during download phase)
upgradeTarget, err := GetOlaresUpgradeTarget()
if err != nil {
return err
return fmt.Errorf("error getting Olares upgrade target: %v", err.Error())
}
if targetVersion != nil {
// get current version and compare with target version
currentVersionStr, err := utils.GetTerminusVersion(ctx, dynamicClient)
if err != nil {
klog.Error("failed to get current version: ", err)
return err
}
currentVersion, err := semver.NewVersion(*currentVersionStr)
if err != nil || currentVersion.LessThan(targetVersion) {
CurrentState.UpgradingTarget = targetVersion.Original()
currentTerminusState = Upgrading
return nil
}
upgradeDownloadCompleted, err := IsUpgradeDownloadCompleted()
if err != nil {
return fmt.Errorf("error checking if upgrade download completed: %v", err.Error())
}
upgradeDownloadOnly, err := IsUpgradeDownloadOnly()
if err != nil {
return fmt.Errorf("error checking if upgrade download only: %v", err.Error())
}
if upgradeTarget != nil && upgradeDownloadCompleted && !upgradeDownloadOnly {
currentTerminusState = Upgrading
return nil
}
// not upgrading, reset upgrading status
CurrentState.UpgradingState = ""
CurrentState.UpgradingTarget = ""
CurrentState.UpgradingRetryNum = 0
CurrentState.UpgradingStep = ""
CurrentState.UpgradingProgressNum = 0
CurrentState.UpgradingProgress = ""
CurrentState.UpgradingError = ""
if tmsrunning, err := utils.IsTerminusRunning(ctx, kubeClient); err != nil {
currentTerminusState = SystemError


@@ -0,0 +1,41 @@
package state
import (
"math"
"math/rand"
"time"
)
const (
retryBaseDelay = 5 * time.Second
retryMaxDelay = 10 * time.Minute
retryBackoffFactor = 2.0
)
func calculateNextRetryDelay(retryNum int) time.Duration {
backoffDelay := float64(retryBaseDelay) * math.Pow(retryBackoffFactor, float64(retryNum))
if backoffDelay > float64(retryMaxDelay) {
backoffDelay = float64(retryMaxDelay)
}
delay := time.Duration(backoffDelay)
jitter := float64(delay) * 0.25 * (rand.Float64()*2 - 1)
delay = time.Duration(float64(delay) + jitter)
if delay < 0 {
delay = retryBaseDelay
}
return delay
}
func calculateNextRetryTime(retryNum int) time.Time {
delay := calculateNextRetryDelay(retryNum)
return time.Now().Add(delay)
}
func CalculateNextRetryTime(retryNum int) time.Time {
return calculateNextRetryTime(retryNum)
}


@@ -37,6 +37,7 @@ const (
Shutdown TerminusState = "shutdown"
Restarting TerminusState = "restarting"
Checking TerminusState = "checking"
NetworkNotReady TerminusState = "network-not-ready"
)
func (s TerminusState) String() string {


@@ -143,6 +143,28 @@ func GetOlaresUpgradeTarget() (*semver.Version, error) {
return version, nil
}
func IsUpgradeDownloadOnly() (bool, error) {
_, err := os.Stat(commands.UPGRADE_DOWNLOADONLY_FILE)
if err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
return true, nil
}
func IsUpgradeDownloadCompleted() (bool, error) {
_, err := os.Stat(commands.UPGRADE_DOWNLOADED_FILE)
if err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
return true, nil
}
func IsIpChangeRunning() (bool, error) {
running, err := isProcessRunning(commands.CHANGINGIP_PID_FILE)
if err != nil {


@@ -63,6 +63,11 @@ func (i *collectLogs) Execute(ctx context.Context, p any) (res any, err error) {
return
}
if adminUser == nil {
errStr = "admin user not found"
return
}
hostPath, err := utils.GetUserspacePvcHostPath(ctx, adminUser.GetName(), kubeClient)
if err != nil {
errStr = fmt.Sprintf("get admin user host path error, %v", err)


@@ -16,20 +16,22 @@ var (
COMMAND_BASE_DIR = "" // deprecated shell command base dir
CDN_URL = "https://dc3p1870nn3cj.cloudfront.net"
OS_ROOT_DIR = "/olares"
INSTALLING_PID_FILE = "installing.pid"
UNINSTALLING_PID_FILE = "uninstalling.pid"
CHANGINGIP_PID_FILE = "changingip.pid"
UPGRADE_TARGET_FILE = "upgrade.target"
PREV_IP_TO_CHANGE_FILE = ".prev_ip"
PREV_IP_CHANGE_FAILED = ".ip_change_failed"
INSTALL_LOCK = ".installed"
LOG_FILE = "install.log"
TERMINUS_BASE_DIR = ""
MOUNT_BASE_DIR = path.Join(OS_ROOT_DIR, "share")
PREPARE_LOCK = ".prepared"
REDIS_CONF = OS_ROOT_DIR + "/data/redis/etc/redis.conf"
EXPORT_POD_LOGS_DIR = "Home/pod_logs"
OS_ROOT_DIR = "/olares"
INSTALLING_PID_FILE = "installing.pid"
UNINSTALLING_PID_FILE = "uninstalling.pid"
CHANGINGIP_PID_FILE = "changingip.pid"
UPGRADE_TARGET_FILE = "upgrade.target"
UPGRADE_DOWNLOADONLY_FILE = "upgrade.downloadonly"
UPGRADE_DOWNLOADED_FILE = "upgrade.downloaded"
PREV_IP_TO_CHANGE_FILE = ".prev_ip"
PREV_IP_CHANGE_FAILED = ".ip_change_failed"
INSTALL_LOCK = ".installed"
LOG_FILE = "install.log"
TERMINUS_BASE_DIR = ""
MOUNT_BASE_DIR = path.Join(OS_ROOT_DIR, "share")
PREPARE_LOCK = ".prepared"
REDIS_CONF = OS_ROOT_DIR + "/data/redis/etc/redis.conf"
EXPORT_POD_LOGS_DIR = "Home/pod_logs"
ProgressNumFinished = 100
)
@@ -45,6 +47,8 @@ func Init() {
UNINSTALLING_PID_FILE = filepath.Join(baseDir, UNINSTALLING_PID_FILE)
CHANGINGIP_PID_FILE = filepath.Join(baseDir, CHANGINGIP_PID_FILE)
UPGRADE_TARGET_FILE = filepath.Join(baseDir, UPGRADE_TARGET_FILE)
UPGRADE_DOWNLOADONLY_FILE = filepath.Join(baseDir, UPGRADE_DOWNLOADONLY_FILE)
UPGRADE_DOWNLOADED_FILE = filepath.Join(baseDir, UPGRADE_DOWNLOADED_FILE)
INSTALL_LOCK = filepath.Join(baseDir, INSTALL_LOCK)
PREPARE_LOCK = filepath.Join(baseDir, PREPARE_LOCK)
PREV_IP_TO_CHANGE_FILE = filepath.Join(baseDir, PREV_IP_TO_CHANGE_FILE)


@@ -19,14 +19,15 @@ const (
Uninstall Operations = "uninstall"
CreateUpgradeTarget Operations = "createUpgradeTarget"
RemoveUpgradeTarget Operations = "removeUpgradeTarget"
DownloadCLI Operations = "downloadCLI"
DownloadWizard Operations = "downloadWizard"
VersionCompatibilityCheck Operations = "versionCompatibilityCheck"
UpgradeHealthCheck Operations = "upgradeHealthCheck"
DownloadComponent Operations = "downloadComponent"
PrepareImages Operations = "prepareImages"
PrepareOlaresd Operations = "prepareOlaresd"
ImportImages Operations = "importImages"
InstallOlaresd Operations = "installOlaresd"
Upgrade Operations = "upgrade"
UpgradeCli Operations = "upgradeCli"
InstallCLI Operations = "installCLI"
Reboot Operations = "reboot"
Shutdown Operations = "shutdown"
ConnectWifi Operations = "connectWifi"


@@ -15,6 +15,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/yaml"
"k8s.io/klog/v2"
)
type versionCompatibilityCheck struct {
@@ -91,6 +92,19 @@ func NewHealthCheck() commands.Interface {
}
func (i *healthCheck) Execute(ctx context.Context, _ any) (res any, err error) {
klog.Info("Starting upgrade health check")
const minAvailableSpace = 100 * 1024 * 1024 * 1024 // 100GB in bytes
availableSpace, err := utils.GetDiskAvailableSpace("/")
if err != nil {
return nil, fmt.Errorf("error checking disk space: %s", err)
}
klog.Infof("Root partition available space: %.2fGB", float64(availableSpace)/(1024*1024*1024))
if availableSpace < minAvailableSpace {
return nil, fmt.Errorf("insufficient disk space: %.2fGB available, minimum 100GB required",
float64(availableSpace)/(1024*1024*1024))
}
client, err := utils.GetKubeClient()
if err != nil {
return nil, fmt.Errorf("error getting kubernetes client: %s", err)
@@ -132,5 +146,33 @@ func (i *healthCheck) Execute(ctx context.Context, _ any) (res any, err error) {
}
}
criticalNamespaces := []string{"os-platform", "os-framework"}
for _, namespace := range criticalNamespaces {
pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
if err != nil {
return nil, fmt.Errorf("error listing pods in namespace %s: %s", namespace, err)
}
for _, pod := range pods.Items {
if pod.Status.Phase == corev1.PodSucceeded {
continue
}
podStatus := utils.GetPodStatus(&pod)
if podStatus != "Running" && podStatus != "Completed" {
klog.Errorf("Pod %s/%s is not healthy: %s", namespace, pod.Name, podStatus)
return nil, fmt.Errorf("pod %s/%s is not healthy: %s", namespace, pod.Name, podStatus)
}
if !utils.IsPodReady(&pod) && pod.Status.Phase == corev1.PodRunning {
klog.Warningf("Pod %s/%s is running but not ready", namespace, pod.Name)
return nil, fmt.Errorf("pod %s/%s is running but not ready", namespace, pod.Name)
}
}
}
klog.Info("health checks passed for upgrade")
return newExecutionRes(true, nil), nil
}


@@ -0,0 +1,84 @@
package upgrade
import (
"context"
"errors"
"fmt"
"os"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/commands"
)
type UpgradeRequest struct {
Version string `json:"version"`
DownloadOnly bool `json:"downloadOnly"`
}
type createUpgradeTarget struct {
commands.Operation
}
var _ commands.Interface = &createUpgradeTarget{}
func NewCreateUpgradeTarget() commands.Interface {
return &createUpgradeTarget{
Operation: commands.Operation{
Name: commands.CreateUpgradeTarget,
},
}
}
func (i *createUpgradeTarget) Execute(ctx context.Context, p any) (res any, err error) {
req, ok := p.(UpgradeRequest)
if !ok {
return nil, errors.New("invalid param")
}
if err := checkVersionConflicts(req.Version); err != nil {
return nil, err
}
if err := createUpgradeTargetFile(req.Version); err != nil {
return nil, fmt.Errorf("failed to create upgrade target: %v", err)
}
if req.DownloadOnly {
if err := createUpgradeDownloadOnlyFile(); err != nil {
return nil, fmt.Errorf("failed to create upgrade downloadonly file: %v", err)
}
} else {
if err := removeUpgradeDownloadOnlyFile(); err != nil && !os.IsNotExist(err) {
return nil, fmt.Errorf("failed to remove upgrade downloadonly file: %v", err)
}
}
state.StateTrigger <- struct{}{}
return NewExecutionRes(true, nil), nil
}
func checkVersionConflicts(version string) error {
if state.CurrentState.UpgradingState == state.InProgress {
return fmt.Errorf("system is currently upgrading")
}
upgradeTarget, err := state.GetOlaresUpgradeTarget()
if err == nil && upgradeTarget != nil && upgradeTarget.Original() != version {
return fmt.Errorf("different upgrade version %s already exists, please cancel it first", upgradeTarget.Original())
}
return nil
}
func createUpgradeTargetFile(version string) error {
return os.WriteFile(commands.UPGRADE_TARGET_FILE, []byte(version), 0755)
}
func createUpgradeDownloadOnlyFile() error {
return os.WriteFile(commands.UPGRADE_DOWNLOADONLY_FILE, []byte(""), 0755)
}
func removeUpgradeDownloadOnlyFile() error {
return os.Remove(commands.UPGRADE_DOWNLOADONLY_FILE)
}


@@ -0,0 +1,85 @@
package upgrade
import (
"context"
"errors"
"fmt"
"os"
"path/filepath"
"runtime"
"github.com/Masterminds/semver/v3"
"github.com/beclab/Olares/daemon/pkg/commands"
"k8s.io/klog/v2"
)
type downloadCLI struct {
commands.Operation
}
var _ commands.Interface = &downloadCLI{}
func NewDownloadCLI() commands.Interface {
return &downloadCLI{
Operation: commands.Operation{
Name: commands.DownloadCLI,
},
}
}
func (i *downloadCLI) Execute(ctx context.Context, p any) (res any, err error) {
version, ok := p.(string)
if !ok {
return nil, errors.New("invalid param")
}
targetVersion, err := semver.NewVersion(version)
if err != nil {
return nil, fmt.Errorf("invalid target version %s: %v", version, err)
}
currentVersion, err := getCurrentCliVersion()
if err != nil {
// if we can't get the current version, assume we need to download
klog.Warningf("Failed to get current olares-cli version: %v, proceeding with download", err)
} else {
if !currentVersion.LessThan(targetVersion) {
return newExecutionRes(true, nil), nil
}
}
arch := "amd64"
// note: runtime.GOARCH reports "arm64" on 64-bit ARM, so checking for "arm" would never match there
if runtime.GOARCH == "arm64" {
arch = "arm64"
}
destDir := filepath.Join(commands.TERMINUS_BASE_DIR, "pkg", "components")
if err := os.MkdirAll(destDir, 0755); err != nil {
return nil, fmt.Errorf("failed to create components directory: %v", err)
}
downloadURL := fmt.Sprintf("%s/olares-cli-v%s_linux_%s.tar.gz", commands.CDN_URL, version, arch)
tarFile := filepath.Join(destDir, fmt.Sprintf("olares-cli-v%s.tar.gz", version))
if err := downloadFile(downloadURL, tarFile); err != nil {
return nil, fmt.Errorf("failed to download olares-cli: %v", err)
}
if err := extractTarGz(tarFile, destDir); err != nil {
return nil, fmt.Errorf("failed to extract olares-cli: %v", err)
}
binaryPath := filepath.Join(destDir, "olares-cli")
versionedPath := filepath.Join(destDir, fmt.Sprintf("olares-cli-v%s", version))
if err := os.Rename(binaryPath, versionedPath); err != nil {
return nil, fmt.Errorf("failed to rename olares-cli binary: %v", err)
}
if err := os.Chmod(versionedPath, 0755); err != nil {
return nil, fmt.Errorf("failed to make olares-cli executable: %v", err)
}
os.Remove(tarFile)
return newExecutionRes(true, nil), nil
}


@@ -26,10 +26,10 @@ type prepareImages struct {
var _ commands.Interface = &prepareImages{}
func NewPrepareImages() commands.Interface {
func NewImportImages() commands.Interface {
return &prepareImages{
Operation: commands.Operation{
Name: commands.PrepareImages,
Name: commands.ImportImages,
},
progressKeywords: []progressKeyword{
{"Preload Container Images execute successfully", commands.ProgressNumFinished},
@@ -119,7 +119,7 @@ func (i *prepareImages) refreshProgress() error {
if strings.Contains(line.Text, p.KeyWord) {
lineProgress = p.ProgressNum
} else {
lineProgress = parseProgressFromItemProgress(line.Text)
lineProgress = parseImagePrepareProgressByItemProgress(line.Text)
}
if i.progress < lineProgress {
i.progress = lineProgress


@@ -0,0 +1,67 @@
package upgrade
import (
"context"
"errors"
"fmt"
"github.com/Masterminds/semver/v3"
"github.com/beclab/Olares/daemon/pkg/commands"
"k8s.io/klog/v2"
"os"
"os/exec"
"path/filepath"
)
type installCLI struct {
commands.Operation
}
var _ commands.Interface = &installCLI{}
func NewInstallCLI() commands.Interface {
return &installCLI{
Operation: commands.Operation{
Name: commands.InstallCLI,
},
}
}
func (i *installCLI) Execute(ctx context.Context, p any) (res any, err error) {
version, ok := p.(string)
if !ok {
return nil, errors.New("invalid param")
}
targetVersion, err := semver.NewVersion(version)
if err != nil {
return nil, fmt.Errorf("invalid target version %s: %v", version, err)
}
currentVersion, err := getCurrentCliVersion()
if err != nil {
klog.Warningf("Failed to get current olares-cli version: %v, proceeding with installation", err)
} else {
if !currentVersion.LessThan(targetVersion) {
return newExecutionRes(true, nil), nil
}
}
preDownloadedPath := filepath.Join(commands.TERMINUS_BASE_DIR, "pkg", "components", fmt.Sprintf("olares-cli-v%s", version))
if _, err := os.Stat(preDownloadedPath); err != nil {
klog.Warningf("Failed to find pre-downloaded binary path %s: %v", preDownloadedPath, err)
return newExecutionRes(false, nil), err
}
cmd := exec.Command("cp", "-f", preDownloadedPath, "/usr/local/bin/olares-cli")
err = cmd.Run()
if err != nil {
klog.Warningf("Failed to install olares-cli: %v", err)
return newExecutionRes(false, nil), err
}
if err := os.Chmod("/usr/local/bin/olares-cli", 0755); err != nil {
return nil, fmt.Errorf("failed to make olares-cli executable: %v", err)
}
return newExecutionRes(true, nil), nil
}


@@ -10,6 +10,7 @@ import (
"strings"
"time"
semver "github.com/Masterminds/semver/v3"
"github.com/beclab/Olares/daemon/pkg/cli"
"github.com/beclab/Olares/daemon/pkg/commands"
"github.com/nxadm/tail"
@@ -26,10 +27,10 @@ type prepareOlaresd struct {
var _ commands.Interface = &prepareOlaresd{}
func NewPrepareOlaresd() commands.Interface {
func NewInstallOlaresd() commands.Interface {
return &prepareOlaresd{
Operation: commands.Operation{
Name: commands.PrepareOlaresd,
Name: commands.InstallOlaresd,
},
progressKeywords: []progressKeyword{
{"ReplaceOlaresdBinary success", 30},
@@ -52,6 +53,19 @@ func (i *prepareOlaresd) Execute(ctx context.Context, p any) (res any, err error
if !ok {
return nil, errors.New("invalid param")
}
targetVersion, err := semver.NewVersion(version)
if err != nil {
return nil, fmt.Errorf("invalid target version %s: %v", version, err)
}
currentVersion, err := getCurrentDaemonVersion()
if err != nil {
klog.Warningf("Failed to get current olaresd version: %v, proceeding with installation", err)
} else {
if !currentVersion.LessThan(targetVersion) {
return newExecutionRes(true, nil), nil
}
}
i.logFile = filepath.Join(commands.TERMINUS_BASE_DIR, "versions", "v"+version, "logs", "install.log")
if err := i.refreshProgress(); err != nil {


@@ -0,0 +1,57 @@
package upgrade
import (
"context"
"os"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/commands"
)
type removeUpgradeTarget struct {
commands.Operation
}
var _ commands.Interface = &removeUpgradeTarget{}
func NewRemoveUpgradeTarget() commands.Interface {
return &removeUpgradeTarget{
Operation: commands.Operation{
Name: commands.RemoveUpgradeTarget,
},
}
}
func (i *removeUpgradeTarget) Execute(ctx context.Context, p any) (res any, err error) {
err = RemoveUpgradeFiles()
if err != nil {
return nil, err
}
state.CurrentState.UpgradingDownloadState = ""
state.CurrentState.UpgradingDownloadStep = ""
state.CurrentState.UpgradingDownloadProgress = ""
state.CurrentState.UpgradingDownloadProgressNum = 0
state.CurrentState.UpgradingDownloadError = ""
state.StateTrigger <- struct{}{}
return NewExecutionRes(true, nil), nil
}
func RemoveUpgradeFiles() error {
// attempt to remove all files whether they exist or not (idempotent)
files := []string{
commands.UPGRADE_TARGET_FILE,
commands.UPGRADE_DOWNLOADONLY_FILE,
commands.UPGRADE_DOWNLOADED_FILE,
}
for _, file := range files {
if err := os.Remove(file); err != nil && !os.IsNotExist(err) {
return err
}
}
return nil
}


@@ -31,6 +31,10 @@ func newExecutionRes(finished bool, progressChan <-chan int) ExecutionRes {
}
}
func NewExecutionRes(finished bool, progressChan <-chan int) ExecutionRes {
return newExecutionRes(finished, progressChan)
}
type progressKeyword struct {
KeyWord string
ProgressNum int
@@ -41,7 +45,7 @@ var itemProcessProgressRE = regexp.MustCompile(`\((\d+)/(\d+)\)`)
func parseProgressFromItemProgress(line string) int {
matches := itemProcessProgressRE.FindAllStringSubmatch(line, 2)
if len(matches) != 3 {
if len(matches) != 1 || len(matches[0]) != 3 {
return 0
}
indexStr, totalStr := matches[0][1], matches[0][2]
@@ -53,5 +57,5 @@ func parseProgressFromItemProgress(line string) int {
if total == 0 || err != nil {
return 0
}
return int(math.Round(index / total * 90))
return int(math.Round((index / total) * 90.0))
}
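The corrected length check above (exactly one match with two capture groups) can be exercised standalone. The sketch below mirrors parseProgressFromItemProgress: a log line carrying a single "(index/total)" marker is mapped onto 0-90% of the phase, with the last 10% reserved for the completion keyword; lines with no marker, or an ambiguous second marker, report 0. The function name is illustrative.

```go
package main

import (
	"fmt"
	"math"
	"regexp"
	"strconv"
)

var itemProgressRE = regexp.MustCompile(`\((\d+)/(\d+)\)`)

// parseItemProgress extracts a single "(i/n)" marker from a log line and
// scales it onto 0-90; it asks the regexp for up to two matches so that
// ambiguous lines (two markers) can be rejected outright.
func parseItemProgress(line string) int {
	matches := itemProgressRE.FindAllStringSubmatch(line, 2)
	if len(matches) != 1 || len(matches[0]) != 3 {
		return 0
	}
	index, err1 := strconv.ParseFloat(matches[0][1], 64)
	total, err2 := strconv.ParseFloat(matches[0][2], 64)
	if err1 != nil || err2 != nil || total == 0 {
		return 0
	}
	return int(math.Round(index / total * 90))
}

func main() {
	fmt.Println(parseItemProgress("importing image (3/10) nginx")) // 27
	fmt.Println(parseItemProgress("no progress marker here"))      // 0
	fmt.Println(parseItemProgress("(1/4) then (2/4) twice"))       // 0: ambiguous line rejected
}
```

The old check `len(matches) != 3` compared the number of matches (at most 2, given the limit) against 3, so it always returned 0; the fix compares against one match of three submatch slots (full match plus two groups).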


@@ -11,7 +11,6 @@ import (
"time"
"github.com/beclab/Olares/daemon/pkg/cli"
"github.com/beclab/Olares/daemon/pkg/cluster/state"
"github.com/beclab/Olares/daemon/pkg/commands"
"github.com/nxadm/tail"
"k8s.io/klog/v2"
@@ -155,41 +154,6 @@ func (i *upgrade) refreshProgress() error {
return nil
}
type createTarget struct {
commands.Operation
}
var _ commands.Interface = &createTarget{}
func NewCreateTarget() commands.Interface {
return &createTarget{
Operation: commands.Operation{
Name: commands.CreateUpgradeTarget,
},
}
}
func (i *createTarget) Execute(ctx context.Context, p any) (res any, err error) {
version, ok := p.(string)
if !ok {
err = errors.New("invalid param")
return
}
if err := createUpgradeTarget(version); err != nil {
return nil, fmt.Errorf("failed to create upgrade target: %v", err)
}
state.StateTrigger <- struct{}{}
return nil, nil
}
func createUpgradeTarget(version string) error {
return os.WriteFile(commands.UPGRADE_TARGET_FILE, []byte(version), 0755)
}
type removeTarget struct {
commands.Operation
}
@@ -204,10 +168,7 @@ func NewRemoveTarget() commands.Interface {
}
}
func (i *removeTarget) Execute(_ context.Context, _ any) (res any, err error) {
if err := os.Remove(commands.UPGRADE_TARGET_FILE); err != nil && !os.IsNotExist(err) {
return nil, err
}
return newExecutionRes(true, nil), nil
func (i *removeTarget) Execute(ctx context.Context, p any) (res any, err error) {
upgradeRemove := NewRemoveUpgradeTarget()
return upgradeRemove.Execute(ctx, p)
}


@@ -1,135 +0,0 @@
package upgrade
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"github.com/Masterminds/semver/v3"
"github.com/beclab/Olares/daemon/pkg/commands"
"k8s.io/klog/v2"
)
type upgradeCli struct {
commands.Operation
}
var _ commands.Interface = &upgradeCli{}
func NewUpgradeCli() commands.Interface {
return &upgradeCli{
Operation: commands.Operation{
Name: commands.UpgradeCli,
},
}
}
func (i *upgradeCli) Execute(ctx context.Context, p any) (res any, err error) {
version, ok := p.(string)
if !ok {
return nil, errors.New("invalid param")
}
targetVersion, err := semver.NewVersion(version)
if err != nil {
return nil, fmt.Errorf("invalid target version %s: %v", version, err)
}
currentVersion, err := getCurrentCliVersion()
if err != nil {
// if we can't get the current version, assume we need to upgrade
klog.Warningf("Failed to get current olares-cli version: %v, proceeding with upgrade", err)
} else {
if !currentVersion.LessThan(targetVersion) {
return newExecutionRes(true, nil), nil
}
}
arch := "amd64"
if runtime.GOARCH == "arm" {
arch = "arm64"
}
tmpDir, err := os.MkdirTemp("", "olares-cli-upgrade-*")
if err != nil {
return nil, fmt.Errorf("failed to create temp directory: %v", err)
}
defer os.RemoveAll(tmpDir)
downloadURL := fmt.Sprintf("%s/olares-cli-v%s_linux_%s.tar.gz", commands.CDN_URL, version, arch)
tarFile := filepath.Join(tmpDir, "olares-cli.tar.gz")
if err := downloadFile(downloadURL, tarFile); err != nil {
return nil, fmt.Errorf("failed to download olares-cli: %v", err)
}
if err := extractTarGz(tarFile, tmpDir); err != nil {
return nil, fmt.Errorf("failed to extract olares-cli: %v", err)
}
binaryPath := filepath.Join(tmpDir, "olares-cli")
if err := os.Rename(binaryPath, "/usr/local/bin/olares-cli"); err != nil {
return nil, fmt.Errorf("failed to move olares-cli to /usr/local/bin: %v", err)
}
if err := os.Chmod("/usr/local/bin/olares-cli", 0755); err != nil {
return nil, fmt.Errorf("failed to make olares-cli executable: %v", err)
}
return newExecutionRes(true, nil), nil
}
func getCurrentCliVersion() (*semver.Version, error) {
cmd := exec.Command("olares-cli", "-v")
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to execute olares-cli -v: %v", err)
}
// parse version from output
// expected format: "olares-cli version ${VERSION}"
parts := strings.Split(string(output), " ")
if len(parts) != 3 {
return nil, fmt.Errorf("unexpected version output format: %s", string(output))
}
version, err := semver.NewVersion(parts[2])
if err != nil {
return nil, fmt.Errorf("invalid version format: %v", err)
}
return version, nil
}
func downloadFile(url, filepath string) error {
resp, err := http.Get(url)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("bad status: %s", resp.Status)
}
out, err := os.Create(filepath)
if err != nil {
return err
}
defer out.Close()
_, err = io.Copy(out, resp.Body)
return err
}
func extractTarGz(tarFile, destDir string) error {
cmd := exec.Command("tar", "-xzf", tarFile, "-C", destDir)
return cmd.Run()
}


@@ -0,0 +1,99 @@
package upgrade
import (
"fmt"
"io"
"net/http"
"os"
"os/exec"
"strings"
"github.com/Masterminds/semver/v3"
)
func getCurrentCliVersion() (*semver.Version, error) {
cmd := exec.Command("olares-cli", "-v")
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to execute olares-cli -v: %v", err)
}
// parse version from output
// expected format: "olares-cli version ${VERSION}"
parts := strings.Split(string(output), " ")
if len(parts) != 3 {
return nil, fmt.Errorf("unexpected version output format: %s", string(output))
}
version, err := semver.NewVersion(strings.TrimSpace(parts[2]))
if err != nil {
return nil, fmt.Errorf("invalid version format: %v", err)
}
return version, nil
}
func getCurrentDaemonVersion() (*semver.Version, error) {
cmd := exec.Command("olaresd", "--version")
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to execute olaresd --version: %v", err)
}
// parse version from output
// expected format: "olaresd version: v${VERSION}"
parts := strings.Split(string(output), " ")
if len(parts) != 3 {
return nil, fmt.Errorf("unexpected version output format: %s", string(output))
}
version, err := semver.NewVersion(strings.TrimPrefix(strings.TrimSpace(parts[2]), "v"))
if err != nil {
return nil, fmt.Errorf("invalid version format: %v", err)
}
return version, nil
}
func downloadFile(url, filepath string) error {
resp, err := http.Get(url)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("bad status: %s", resp.Status)
}
out, err := os.Create(filepath)
if err != nil {
return err
}
defer out.Close()
_, err = io.Copy(out, resp.Body)
return err
}
func extractTarGz(tarFile, destDir string) error {
cmd := exec.Command("tar", "-xzf", tarFile, "-C", destDir)
return cmd.Run()
}
func copyFile(src, dst string) error {
sourceFile, err := os.Open(src)
if err != nil {
return err
}
defer sourceFile.Close()
destFile, err := os.Create(dst)
if err != nil {
return err
}
defer destFile.Close()
_, err = io.Copy(destFile, sourceFile)
return err
}
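The two version helpers above share one stdlib parsing step: split the CLI output on spaces, require exactly three fields, and normalize the last one by trimming whitespace and a leading "v" before handing it to semver.NewVersion. A sketch of just that shared step (the helper name is illustrative, and the semver construction is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVersionOutput extracts the bare version string from output shaped
// like "olares-cli version 1.12.1" or "olaresd version: v0.3.7".
func parseVersionOutput(output string) (string, error) {
	parts := strings.Split(strings.TrimSpace(output), " ")
	if len(parts) != 3 {
		return "", fmt.Errorf("unexpected version output format: %q", output)
	}
	return strings.TrimPrefix(strings.TrimSpace(parts[2]), "v"), nil
}

func main() {
	v1, _ := parseVersionOutput("olares-cli version 1.12.1\n")
	fmt.Println(v1) // 1.12.1

	v2, _ := parseVersionOutput("olaresd version: v0.3.7\n")
	fmt.Println(v2) // 0.3.7

	if _, err := parseVersionOutput("weird output"); err != nil {
		fmt.Println("rejected malformed output")
	}
}
```

Requiring exactly three space-separated fields is deliberately strict: any banner line or extra flag output makes the helper fail loudly instead of feeding garbage into the semver parser.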


@@ -16,3 +16,15 @@ func GetDiskSize() (uint64, error) {
size := fs.Blocks * uint64(fs.Bsize)
return size, nil
}
func GetDiskAvailableSpace(path string) (uint64, error) {
fs := syscall.Statfs_t{}
err := syscall.Statfs(path, &fs)
if err != nil {
klog.Error("get disk available space error, ", err)
return 0, err
}
available := fs.Bavail * uint64(fs.Bsize)
return available, nil
}


@@ -89,7 +89,7 @@ func detectdStorageDevices(ctx context.Context, bus string) (usbDevs []storageDe
token := strings.Split(syspath, "/")
devPath := filepath.Join("/dev", token[len(token)-1])
klog.Info("device path:", device.Properties())
klog.V(8).Info("device path:", device.Properties())
vender := device.Properties()["ID_VENDOR"]
if vender == "" {
vender = device.Properties()["ID_USB_VENDOR"]
@@ -162,7 +162,7 @@ func getMountedPath(devs []storageDevice) ([]string, error) {
var paths []string
for _, m := range list {
if slices.ContainsFunc(devs, func(u storageDevice) bool { return u.DevPath == m.Device }) {
klog.Infof("mount: %v, %v, %v", m.Path, m.Device, devs)
klog.V(8).Infof("mount: %v, %v, %v", m.Path, m.Device, devs)
paths = append(paths, m.Path)
}
}


@@ -278,25 +278,48 @@ func GetAdminUserTerminusName(ctx context.Context, client dynamic.Interface) (st
}
type Filter func(u *unstructured.Unstructured) bool
func GetAdminUser(ctx context.Context, client dynamic.Interface) (*unstructured.Unstructured, error) {
u, err := ListUsers(ctx, client, func(u *unstructured.Unstructured) bool {
role, ok := u.GetAnnotations()[bflconst.UserAnnotationOwnerRole]
if !ok {
return false
}
return role == bflconst.RolePlatformAdmin
})
if err != nil {
klog.Error("list user error, ", err)
return nil, err
}
if len(u) == 0 {
klog.Info("admin user not found")
return nil, nil
}
return u[0], nil
}
func ListUsers(ctx context.Context, client dynamic.Interface, filters ...Filter) ([]*unstructured.Unstructured, error) {
users, err := client.Resource(UserGVR).List(ctx, metav1.ListOptions{})
if err != nil {
klog.Error("list user error, ", err)
return nil, err
}
var userList []*unstructured.Unstructured
for _, u := range users.Items {
role, ok := u.GetAnnotations()[bflconst.UserAnnotationOwnerRole]
if !ok {
continue
u := u // copy, so taking &u below does not alias the loop variable (needed before Go 1.22)
// skip this user if any filter rejects it; a bare continue inside the
// inner loop would only skip the remaining filters, not the user
passesFilters := true
for _, filter := range filters {
if !filter(&u) {
passesFilters = false
break
}
}
if !passesFilters {
continue
}
if role == bflconst.RolePlatformAdmin {
return &u, nil
}
userList = append(userList, &u)
}
return nil, nil
return userList, nil
}
func isKeyPod(pod *corev1.Pod) bool {


@@ -11,7 +11,7 @@ const CHECK_CONNECTIVITY_URL = "http://connectivity-check.ubuntu.com/"
func CheckInterfaceIPv4Connectivity(ctx context.Context, interfaceName string) bool {
// try to connect to the CHECK_CONNECTIVITY_URL using the specified interface
cmd := exec.CommandContext(ctx, "curl", "--interface", interfaceName, "--connect-timeout", "5", "-s", "-o", "/dev/null", CHECK_CONNECTIVITY_URL)
cmd := exec.CommandContext(ctx, "curl", "-4", "--interface", interfaceName, "--connect-timeout", "5", "-s", "-o", "/dev/null", CHECK_CONNECTIVITY_URL)
if err := cmd.Run(); err == nil {
return true
}


@@ -81,7 +81,16 @@ func GetWifiDevice(ctx context.Context) (map[string]Device, error) {
}
func GetAllDevice(ctx context.Context) (map[string]Device, error) {
return deviceStatus(ctx, func(d *Device) bool { return true })
return deviceStatus(ctx, func(d *Device) bool {
managedByOthers := []string{"cali", "kube", "tun", "tailscale"}
for _, devPrefix := range managedByOthers {
if strings.HasPrefix(d.Name, devPrefix) {
return false
}
}
return true
})
}
func ManagedAllDevices(ctx context.Context) (map[string]Device, error) {
@@ -102,7 +111,6 @@ func ManagedAllDevices(ctx context.Context) (map[string]Device, error) {
cmd := exec.CommandContext(ctx, nmcli, "device", "set", d.Name, "managed", "yes")
cmd.Env = os.Environ()
output, err := cmd.CombinedOutput()
klog.Info(string(output))
if err != nil {
klog.Error("exec cmd error, ", err, ", nmcli device set ", d.Name, " managed yes")
return false
@@ -252,17 +260,17 @@ func showDeviceByNM(ctx context.Context, deviceName string, device *Device) erro
switch key {
case "IP4.ADDRESS[1]":
ipAndMask := strings.Split(value, "/")
if len(ipAndMask) > 2 {
if len(ipAndMask) > 1 {
device.Ipv4Address = ipAndMask[0]
cidr, err := strconv.Atoi(ipAndMask[1])
if err != nil {
klog.Error("convert cidr error, ", err)
return err
continue
}
mask, err := MaskFromCIDR(cidr)
if err != nil {
klog.Error("get mask from cidr error, ", err)
return err
continue
}
device.Ipv4Mask = mask
}
@@ -279,7 +287,7 @@ func showDeviceByNM(ctx context.Context, deviceName string, device *Device) erro
case "GENERAL.CONNECTION":
err := showConnectionByNM(ctx, value, device)
if err != nil {
klog.Error("get connection method error, ", err)
klog.V(8).Info("get connection method error, ", err, ", connection name: ", value)
}
default:
continue

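The bounds-check fix above (`len(ipAndMask) > 1` instead of `> 2`) matters because a valid nmcli `IP4.ADDRESS[1]` value such as `192.168.1.10/24` splits on `/` into exactly two parts, so the old `> 2` condition never fired. A minimal stdlib sketch of the same parse, approximating the `MaskFromCIDR` helper with `net.CIDRMask` (an assumption about its behavior — the real helper is not shown in this diff):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// parseIP4Address splits an nmcli "IP4.ADDRESS[1]" value like
// "192.168.1.10/24" into the address and a dotted-quad netmask.
// A well-formed value yields a two-element slice, hence `> 1`.
func parseIP4Address(value string) (ip, mask string, ok bool) {
	ipAndMask := strings.Split(value, "/")
	if len(ipAndMask) > 1 {
		cidr, err := strconv.Atoi(ipAndMask[1])
		if err != nil || cidr < 0 || cidr > 32 {
			return "", "", false
		}
		// net.CIDRMask(24, 32) -> 255.255.255.0
		m := net.CIDRMask(cidr, 32)
		return ipAndMask[0], net.IP(m).String(), true
	}
	return "", "", false
}

func main() {
	ip, mask, ok := parseIP4Address("192.168.1.10/24")
	fmt.Println(ip, mask, ok)
}
```

The switch from `return err` to `continue` in the same hunk keeps one malformed field from aborting the whole device scan.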

@@ -0,0 +1,96 @@
package utils
import (
"fmt"
corev1 "k8s.io/api/core/v1"
)
// GetPodStatus returns a kubectl-like status string for a pod
// because the pod.Status.Phase field is unreliable
func GetPodStatus(pod *corev1.Pod) string {
if pod.DeletionTimestamp != nil {
if pod.Status.Reason == "NodeLost" {
return "Unknown"
}
return "Terminating"
}
for i, container := range pod.Status.InitContainerStatuses {
if container.State.Terminated != nil && container.State.Terminated.ExitCode == 0 {
continue
}
if container.State.Terminated != nil {
if container.State.Terminated.Reason != "" {
return fmt.Sprintf("Init:%s", container.State.Terminated.Reason)
}
if container.State.Terminated.Signal != 0 {
return fmt.Sprintf("Init:Signal:%d", container.State.Terminated.Signal)
}
return fmt.Sprintf("Init:ExitCode:%d", container.State.Terminated.ExitCode)
}
if container.State.Waiting != nil && container.State.Waiting.Reason != "" {
return fmt.Sprintf("Init:%s", container.State.Waiting.Reason)
}
return fmt.Sprintf("Init:%d/%d", i, len(pod.Spec.InitContainers))
}
hasRunning := false
for _, container := range pod.Status.ContainerStatuses {
if container.State.Waiting != nil && container.State.Waiting.Reason != "" {
return container.State.Waiting.Reason
}
if container.State.Terminated != nil && container.State.Terminated.Reason != "" {
return container.State.Terminated.Reason
}
if container.State.Terminated != nil && container.State.Terminated.Reason == "" {
if container.State.Terminated.Signal != 0 {
return fmt.Sprintf("Signal:%d", container.State.Terminated.Signal)
}
return fmt.Sprintf("ExitCode:%d", container.State.Terminated.ExitCode)
}
if container.State.Running != nil && container.Ready {
hasRunning = true
}
}
for _, condition := range pod.Status.Conditions {
if condition.Type == corev1.PodReady && condition.Status == corev1.ConditionFalse && condition.Reason != "" {
return condition.Reason
}
}
if pod.Status.Phase == corev1.PodRunning && hasRunning {
return "Running"
}
if pod.Status.Phase != "" {
return string(pod.Status.Phase)
}
if pod.Status.Reason != "" {
return pod.Status.Reason
}
return "Unknown"
}
// IsPodReady checks if a pod is fully ready (all containers ready)
func IsPodReady(pod *corev1.Pod) bool {
if pod.Status.Phase != corev1.PodRunning {
return false
}
for _, condition := range pod.Status.Conditions {
if condition.Type == corev1.PodReady {
return condition.Status == corev1.ConditionTrue
}
}
return false
}
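The precedence `GetPodStatus` applies per container — Waiting reason first, then Terminated reason, then signal/exit code, finally "Running" if anything is ready — can be shown with plain structs standing in for the corev1 state types (hypothetical stand-ins, so the block runs without the Kubernetes client libraries):

```go
package main

import "fmt"

// state is a minimal stand-in for corev1.ContainerStatus, keeping only
// the fields the precedence logic looks at.
type state struct {
	waitingReason    string
	terminated       bool
	terminatedReason string
	exitCode         int
	running          bool
	ready            bool
}

// containerStatus applies the same ordering as GetPodStatus: a Waiting
// reason wins, then a Terminated reason, then the exit code, and only
// if a container is running and ready do we report "Running".
func containerStatus(states []state) string {
	hasRunning := false
	for _, s := range states {
		if s.waitingReason != "" {
			return s.waitingReason
		}
		if s.terminated && s.terminatedReason != "" {
			return s.terminatedReason
		}
		if s.terminated {
			return fmt.Sprintf("ExitCode:%d", s.exitCode)
		}
		if s.running && s.ready {
			hasRunning = true
		}
	}
	if hasRunning {
		return "Running"
	}
	return "Unknown"
}

func main() {
	fmt.Println(containerStatus([]state{{waitingReason: "CrashLoopBackOff"}}))
	fmt.Println(containerStatus([]state{{running: true, ready: true}}))
}
```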


@@ -110,3 +110,20 @@ func MoveFile(sourcePath, destPath string) error {
return nil
}
func GetOlaresNameFromReleaseFile() (string, error) {
data, err := godotenv.Read("/etc/olares/release")
if err != nil {
if os.IsNotExist(err) {
return "", fmt.Errorf("olares release file not found")
}
return "", fmt.Errorf("read olares release file error: %w", err)
}
name := data["OLARES_NAME"]
if name == "" {
return "", fmt.Errorf("olares name not found in release file")
}
return name, nil
}
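`GetOlaresNameFromReleaseFile` relies on `godotenv.Read` to parse `/etc/olares/release`. A rough stdlib stand-in for that call illustrates the expected `KEY=VALUE` format (a sketch under that assumption — the real library additionally handles quoting and escape sequences):

```go
package main

import (
	"fmt"
	"strings"
)

// readReleaseVars is a minimal stand-in for godotenv.Read: it parses
// simple KEY=VALUE lines, skipping blanks and '#' comments, and strips
// surrounding double quotes from values.
func readReleaseVars(content string) map[string]string {
	vars := make(map[string]string)
	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, found := strings.Cut(line, "="); found {
			vars[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
		}
	}
	return vars
}

func main() {
	data := readReleaseVars("# /etc/olares/release\nOLARES_NAME=\"alice@olares.com\"\n")
	fmt.Println(data["OLARES_NAME"])
}
```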


@@ -83,7 +83,7 @@ spec:
value: os_framework_analytics
containers:
- name: analytics-server
image: beclab/analytics-api:v0.0.6
image: beclab/analytics-api:v0.0.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010


@@ -0,0 +1,142 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.14.0
name: appimages.app.bytetrade.io
spec:
group: app.bytetrade.io
names:
categories:
- all
kind: AppImage
listKind: AppImageList
plural: appimages
shortNames:
- appimage
singular: appimage
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .spec.appName
name: application name
type: string
- jsonPath: .status.state
name: state
type: string
- jsonPath: .metadata.creationTimestamp
name: age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: AppImage is the Schema for the image managers API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
appName:
type: string
nodes:
items:
type: string
type: array
refs:
items:
type: string
type: array
required:
- appName
- nodes
- refs
type: object
status:
properties:
conditions:
items:
properties:
completed:
type: boolean
node:
type: string
required:
- completed
- node
type: object
type: array
images:
items:
properties:
architecture:
type: string
layersData:
items:
properties:
annotations:
additionalProperties:
type: string
type: object
digest:
type: string
mediaType:
type: string
offset:
format: int64
type: integer
size:
format: int64
type: integer
required:
- digest
- mediaType
- offset
- size
type: object
type: array
name:
type: string
node:
type: string
os:
type: string
variant:
type: string
required:
- layersData
- name
- node
type: object
type: array
message:
type: string
state:
type: string
statueTime:
format: date-time
type: string
required:
- state
- statueTime
type: object
type: object
served: true
storage: true
subresources:
status: {}


@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.14.0
name: applicationmanagers.app.bytetrade.io
spec:
group: app.bytetrade.io
@@ -39,14 +38,19 @@ spec:
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -85,12 +89,16 @@ spec:
opGeneration:
format: int64
type: integer
opId:
type: string
opRecords:
items:
description: OpRecord contains details of an operation.
properties:
message:
type: string
opId:
type: string
opType:
description: OpType represents the type of operation being performed.
type: string


@@ -94,6 +94,8 @@ spec:
title:
description: Optional. if invisible=true.
type: string
url:
type: string
windowPushState:
type: boolean
required:


@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
controller-gen.kubebuilder.io/version: v0.14.0
name: imagemanagers.app.bytetrade.io
spec:
group: app.bytetrade.io
@@ -38,14 +37,19 @@ spec:
description: ImageManager is the Schema for the image managers API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
@@ -66,6 +70,8 @@ spec:
items:
properties:
imagePullPolicy:
description: PullPolicy describes a policy for if/when to pull
a container image
type: string
name:
type: string
@@ -89,12 +95,16 @@ spec:
type: string
type: object
type: object
description: 'INSERT ADDITIONAL STATUS FIELD - define observed state
of cluster Important: Run "make" to regenerate code after modifying
this file'
description: |-
INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
Important: Run "make" to regenerate code after modifying this file
type: object
message:
type: string
nodeDownloadStatus:
additionalProperties:
type: string
type: object
state:
type: string
statusTime:


@@ -163,7 +163,7 @@ spec:
priorityClassName: "system-cluster-critical"
containers:
- name: app-service
image: beclab/app-service:0.3.46
image: beclab/app-service:0.3.47
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
@@ -213,6 +213,8 @@ spec:
value: os.users
- name: NATS_SUBJECT_SYSTEM_GROUPS
value: os.groups
- name: NATS_SUBJECT_SYSTEM_APPLICATION
value: os.application
- name: APP_RANDOM_KEY
valueFrom:
secretKeyRef:
@@ -396,7 +398,7 @@ spec:
hostNetwork: true
containers:
- name: image-service
image: beclab/image-service:0.3.46
image: beclab/image-service:0.3.47
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0


@@ -5,6 +5,7 @@ metadata:
namespace: {{ .Release.Namespace }}
labels:
app: argoworkflows
applications.app.bytetrade.io/author: bytetrade.io
app.kubernetes.io/managed-by: Helm
annotations:
applications.app.bytetrade.io/icon: https://argoproj.github.io/argo-workflows/assets/logo.png


@@ -5,6 +5,7 @@ metadata:
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/component: workflow-controller
applications.app.bytetrade.io/author: bytetrade.io
app.kubernetes.io/instance: argo
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argoworkflows-workflow-controller


@@ -44,6 +44,7 @@ data:
pg_password: {{ $pg_password }}
nats_password: {{ $nats_password }}
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
@@ -64,7 +65,6 @@ spec:
databases:
- name: authelia
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
@@ -168,14 +168,6 @@ data:
- domain: 'example.com'
authelia_url: https://authelia-svc.example.com/
redis:
host: authelia-storage-svc
port: 6379
# This secret can also be set using the env variables AUTHELIA_SESSION_REDIS_PASSWORD_FILE
password: {{ $redis_password | b64dec }}
maximum_active_connections: 100
minimum_idle_connections: 30
regulation:
max_retries: 3
find_time: 120
@@ -191,7 +183,7 @@ data:
username: authelia_os_framework
password: {{ $pg_password | b64dec }}
timeout: 5s
notifier:
disable_startup_check: false
filesystem:
@@ -383,12 +375,6 @@ spec:
value: {{ $pg_password | b64dec }}
- name: PGDB
value: os_framework_authelia
- args:
- -it
- authelia-storage-svc:6379,redis.kubesphere-system:6379
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-redis
- name: setsysctl
image: 'busybox:1.28'
command:
@@ -403,7 +389,7 @@ spec:
privileged: true
containers:
- name: authelia
image: beclab/auth:0.2.6
image: beclab/auth:0.2.10
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9091
@@ -427,7 +413,7 @@ spec:
key: nats_password
name: authelia-secrets
- name: NATS_SUBJECT
value: "terminus.{{ .Release.Namespace }}.system.notification"
value: "os.notification"
volumeMounts:
- name: config
@@ -464,107 +450,3 @@ spec:
name: authelia
port: 9091
targetPort: 9091
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-config
namespace: {{ .Release.Namespace }}
labels:
app: redis
data:
redis.conf: |-
dir /srv
port 6379
bind 0.0.0.0
appendonly yes
daemonize no
#protected-mode no
requirepass {{ $redis_password | b64dec }}
pidfile /srv/redis-6379.pid
maxmemory 200000000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: authelia-storage
namespace: {{ .Release.Namespace }}
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
priorityClassName: "system-cluster-critical"
containers:
- name: redis
image: redis:6.2.13-alpine3.18
imagePullPolicy: IfNotPresent
command:
- "sh"
- "-c"
- "redis-server /usr/local/redis/redis.conf"
ports:
- containerPort: 6379
resources:
requests:
cpu: 20m
memory: 100Mi
limits:
cpu: 500m
memory: 256Mi
# FIXME: bugs in raspbian
# livenessProbe:
# tcpSocket:
# port: 6379
# initialDelaySeconds: 300
# timeoutSeconds: 1
# periodSeconds: 10
# successThreshold: 1
# failureThreshold: 3
# readinessProbe:
# tcpSocket:
# port: 6379
# initialDelaySeconds: 5
# timeoutSeconds: 1
# periodSeconds: 10
# successThreshold: 1
# failureThreshold: 3
volumeMounts:
- name: config
mountPath: /usr/local/redis/redis.conf
subPath: redis.conf
- name: data
mountPath: /srv
volumes:
- name: config
configMap:
name: redis-config
- name: data
hostPath:
type: DirectoryOrCreate
path: '{{ $auth_rootpath }}'
---
apiVersion: v1
kind: Service
metadata:
name: authelia-storage-svc
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP


@@ -223,12 +223,6 @@ spec:
serviceAccountName: bytetrade-controller
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
- authelia-backend.os-framework:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
- name: init-userspace
image: busybox:1.28
volumeMounts:
@@ -328,7 +322,7 @@ spec:
apiVersion: v1
fieldPath: spec.nodeName
- name: ingress
image: beclab/bfl-ingress:v0.3.7
image: beclab/bfl-ingress:v0.3.8
imagePullPolicy: IfNotPresent
volumeMounts:
- name: ngxlog


@@ -247,6 +247,15 @@ spec:
labels:
app: tailscale
spec:
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: headscale
topologyKey: kubernetes.io/hostname
weight: 100
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:


@@ -96,7 +96,7 @@ metadata:
namespace: {{ .Release.Namespace }}
spec:
app: knowledge
appNamespace: {{ .Release.Namespace }}
appNamespace: os
middleware: nats
nats:
password:
@@ -105,13 +105,6 @@ spec:
key: nat_password
name: knowledge-secrets
refs:
- appName: download
appNamespace: os
subjects:
- name: download_status
perm:
- pub
- sub
- appName: user-service
appNamespace: os
subjects:
@@ -120,7 +113,7 @@ spec:
- pub
- sub
subjects:
- name: knowledge
- name: download_status
permission:
pub: allow
sub: allow
@@ -190,7 +183,7 @@ spec:
value: os_framework_knowledge
containers:
- name: knowledge
image: "beclab/knowledge-base-api:v0.12.10"
image: "beclab/knowledge-base-api:v0.12.12"
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
@@ -252,7 +245,7 @@ spec:
memory: 1Gi
- name: backend-server
image: "beclab/recommend-backend:v0.12.4"
image: "beclab/recommend-backend:v0.12.6"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -283,7 +276,7 @@ spec:
- name: WATCH_DIR
value: /data/
- name: YT_DLP_API_URL
value: http://download-svc.os-framework:3082/api/v1/get_metadata
value: http://download-svc.os-framework:3082/api
- name: DOWNLOAD_API_URL
value: http://download-svc.os-framework:3080/api
volumeMounts:
@@ -400,42 +393,7 @@ spec:
- protocol: TCP
name: knowledge-api
port: 3010
targetPort: 3010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: download-nat
namespace: {{ .Release.Namespace }}
spec:
app: download
appNamespace: {{ .Release.Namespace }}
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nat_password
name: knowledge-secrets
refs:
- appName: user-service
appNamespace: user
subjects:
- name: knowledge.*
perm:
- pub
- sub
subjects:
- name: download_status
permission:
pub: allow
sub: allow
export:
- appName: knowledge
sub: allow
pub: allow
user: {{ .Release.Namespace }}-download
targetPort: 3010
---
@@ -529,7 +487,7 @@ spec:
cpu: "1"
memory: 300Mi
- name: yt-dlp
image: "beclab/yt-dlp:v0.12.5"
image: "beclab/yt-dlp:v0.12.7"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -557,11 +515,11 @@ spec:
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: {{ .Release.Namespace }}-download
value: os-knowledge
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT
value: terminus.{{ .Release.Namespace }}.download_status
value: os.download_status
volumeMounts:
- name: config-dir
mountPath: /app/config
@@ -575,7 +533,7 @@ spec:
cpu: "1"
memory: 300Mi
- name: download-spider
image: "beclab/download-spider:v0.12.6"
image: "beclab/download-spider:v0.12.8"
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
@@ -601,7 +559,7 @@ spec:
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: {{ .Release.Namespace }}-download
value: os-knowledge
- name: NATS_PASSWORD
value: {{ $nat_password | b64dec }}
- name: NATS_SUBJECT


@@ -0,0 +1,275 @@
{{- $market_secret := (lookup "v1" "Secret" .Release.Namespace "market-secrets") -}}
{{- $redis_password := "" -}}
{{ if $market_secret -}}
{{ $redis_password = (index $market_secret "data" "redis-passwords") }}
{{ else -}}
{{ $redis_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $market_backend_nats_secret := (lookup "v1" "Secret" .Release.Namespace "market-backend-nats-secret") -}}
{{- $nats_password := "" -}}
{{ if $market_backend_nats_secret -}}
{{ $nats_password = (index $market_backend_nats_secret "data" "nats_password") }}
{{ else -}}
{{ $nats_password = randAlphaNum 16 | b64enc }}
{{- end -}}
{{- $pg_secret := (lookup "v1" "Secret" .Release.Namespace "market-pg-secrets") -}}
{{- $pg_password := "" -}}
{{ if $pg_secret -}}
{{ $pg_password = (index $pg_secret "data" "pg-passwords") }}
{{ else -}}
{{ $pg_password = randAlphaNum 16 | b64enc }}
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: market-backend-nats-secret
namespace: {{ .Release.Namespace }}
type: Opaque
data:
nats_password: {{ $nats_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: market-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
redis-passwords: {{ $redis_password }}
---
apiVersion: v1
kind: Secret
metadata:
name: market-pg-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
data:
pg-passwords: {{ $pg_password }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: market-deployment
namespace: {{ .Release.Namespace }}
labels:
app: appstore
applications.app.bytetrade.io/author: bytetrade.io
spec:
replicas: 1
selector:
matchLabels:
app: appstore
template:
metadata:
labels:
app: appstore
io.bytetrade.app: "true"
annotations:
instrumentation.opentelemetry.io/inject-go: "olares-instrumentation"
instrumentation.opentelemetry.io/go-container-names: "appstore-backend"
instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/opt/app/market"
spec:
priorityClassName: "system-cluster-critical"
initContainers:
- args:
- -it
- authelia-backend.os-framework:9091
image: owncloudci/wait-for:latest
imagePullPolicy: IfNotPresent
name: check-auth
containers:
- name: appstore-backend
image: beclab/market-backend:v0.4.3
imagePullPolicy: IfNotPresent
ports:
- containerPort: 81
env:
{{- range $key, $val := .Values.terminusGlobalEnvs }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
- name: APP_SOTRE_SERVICE_SERVICE_PORT
value: "443"
- name: APP_SERVICE_SERVICE_HOST
value: app-service
- name: APP_SERVICE_SERVICE_PORT
value: "6755"
- name: REPO_URL_PORT
value: "82/"
- name: REPO_URL_HOST
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: NATS_HOST
value: nats.os-platform
- name: NATS_PORT
value: "4222"
- name: NATS_USERNAME
value: os-market-backend
- name: NATS_PASSWORD
valueFrom:
secretKeyRef:
key: nats_password
name: market-backend-nats-secret
- name: SYNCER_REMOTE
value: https://appstore-china-server-prod.api.jointerminus.cn
- name: REDIS_HOST
value: redis-cluster-proxy.os-platform
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
key: redis-passwords
name: market-secrets
- name: REDIS_DB_NUMBER
value: "0"
- name: POSTGRES_HOST
value: citus-headless.os-platform
- name: POSTGRES_PORT
value: "5432"
- name: POSTGRES_DB
value: os_framework_market
- name: POSTGRES_USER
value: market_os_system
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: pg-passwords
name: market-pg-secrets
- name: API_HASH_PATH
value: /api/v1/appstore/hash
- name: API_DATA_PATH
value: /api/v1/appstore/info
- name: API_CHART_PATH
value: /api/v1/applications/{chart_name}/chart
- name: API_DETAIL_PATH
value: /api/v1/applications/info
- name: CHART_ROOT
value: /opt/app/data/v2
- name: NATS_SUBJECT_SYSTEM_USER_STATE
value: os.users.*
- name: GO_ENV
value: prod
volumeMounts:
- name: opt-data
mountPath: /opt/app/data
volumes:
- name: opt-data
hostPath:
path: '{{ .Values.rootPath }}/userdata/Cache/market'
type: DirectoryOrCreate
- name: app
emptyDir: {}
- name: nginx-confd
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: appstore-service
namespace: {{ .Release.Namespace }}
spec:
selector:
app: appstore
type: ClusterIP
ports:
- protocol: TCP
name: appstore-backend
port: 81
targetPort: 8080
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-redis
namespace: {{ .Release.Namespace }}
spec:
app: market
appNamespace: {{ .Release.Namespace }}
middleware: redis
redis:
password:
valueFrom:
secretKeyRef:
key: redis-passwords
name: market-secrets
namespace: market
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-pg
namespace: {{ .Release.Namespace }}
spec:
app: market
appNamespace: {{ .Release.Namespace }}
middleware: postgres
postgreSQL:
user: market_os_system
password:
valueFrom:
secretKeyRef:
key: pg-passwords
name: market-pg-secrets
databases:
- name: market
---
apiVersion: v1
kind: Service
metadata:
name: appstore-svc
namespace: {{ .Release.Namespace }}
spec:
type: ClusterIP
selector:
app: appstore
ports:
- name: "appstore-backend"
protocol: TCP
port: 81
targetPort: 8080
- name: "appstore-websocket"
protocol: TCP
port: 40010
targetPort: 40010
---
apiVersion: apr.bytetrade.io/v1alpha1
kind: MiddlewareRequest
metadata:
name: market-backend-nats
namespace: {{ .Release.Namespace }}
spec:
app: market-backend
appNamespace: os
middleware: nats
nats:
password:
valueFrom:
secretKeyRef:
key: nats_password
name: market-backend-nats-secret
refs:
- appName: user-service
appNamespace: os
subjects:
- name: "application.*"
perm:
- pub
- sub
- appName: user-service
appNamespace: os
subjects:
- name: "market.*"
perm:
- pub
- sub
user: os-market-backend


@@ -26,6 +26,14 @@ spec:
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: TERMINUSD_HOST
value: $(NODE_IP):18088
---
apiVersion: v1


@@ -156,7 +156,7 @@ spec:
value: os_framework_notifications
containers:
- name: notifications-api
image: beclab/notifications-api:v1.12.6
image: beclab/notifications-api:v1.12.7
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3010


@@ -3,7 +3,7 @@ target: prebuilt
output:
containers:
-
name: beclab/reverse-proxy:v0.1.8
name: beclab/reverse-proxy:v0.1.10


@@ -376,11 +376,11 @@ spec:
key: nats_password
name: seahub-nats-secrets
- name: NATS_SUBJECT_SYSTEM_SEAHUB
value: terminus.os-framework.system.seahub
value: os.seahub
- name: NATS_SUBJECT_SYSTEM_USERS
value: terminus.os-framework.system.users
value: os.users
- name: NATS_SUBJECT_SYSTEM_GROUPS
value: terminus.os-framework.system.groups
value: os.groups
volumeMounts:
- name: sync-data
mountPath: /shared


@@ -196,6 +196,8 @@ spec:
labels:
app: search3
spec:
serviceAccount: os-internal
serviceAccountName: os-internal
volumes:
- name: userspace-dir
hostPath:
@@ -238,11 +240,17 @@ spec:
value: os_framework_search3
containers:
- name: search3
image: beclab/search3:v0.0.45
image: beclab/search3:v0.0.50
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: TERMINUSD_HOST
value: $(NODE_IP):18088
- name: DATABASE_URL
value: postgres://search3_os_framework:{{ $pg_password | b64dec }}@citus-0.citus-headless.os-platform:5432/os_framework_search3
- name: NATS_HOST
@@ -257,17 +265,54 @@ spec:
key: nats_password
name: search-server-nats-secret
- name: NATS_SUBJECT_SYSTEM_SEARCH
value: terminus.os-framework.system.search
value: os.search
- name: NATS_SUBJECT_SYSTEM_USERS
value: terminus.os-framework.system.users
value: os.users
- name: NATS_SUBJECT_SYSTEM_GROUPS
value: terminus.os-framework.system.groups
value: os.groups
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: search3monitor
namespace: {{ .Release.Namespace }}
labels:
app: search3monitor
spec:
selector:
matchLabels:
app: search3monitor
template:
metadata:
labels:
app: search3monitor
spec:
serviceAccountName: os-internal
containers:
- name: search3monitor
image: beclab/search3monitor:v0.0.45
image: beclab/search3monitor:v0.0.50
imagePullPolicy: IfNotPresent
env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: TERMINUSD_HOST
value: $(NODE_IP):18088
- name: DATABASE_URL
value: postgres://search3_os_framework:{{ $pg_password | b64dec }}@citus-0.citus-headless.os-platform:5432/os_framework_search3
- name: SEARCH3_SERVER_ADDRESS
value: search3.os-framework:80
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
volumeMounts:
- name: fb-data
mountPath: /appdata
@@ -282,6 +327,24 @@ spec:
privileged: true
runAsUser: 0
allowPrivilegeEscalation: true
volumes:
- name: userspace-dir
hostPath:
path: /olares/rootfs/userspace
type: Directory
- name: fb-data
hostPath:
path: /olares/userdata/Cache/files
type: DirectoryOrCreate
- name: upload-appdata
hostPath:
path: /olares/userdata/Cache
type: DirectoryOrCreate
- name: shared-lib
hostPath:
path: /olares/share
type: Directory
---
apiVersion: v1
kind: Service


@@ -51,7 +51,7 @@ spec:
priorityClassName: "system-cluster-critical"
containers:
- name: system-server
image: beclab/system-server:0.1.23
image: beclab/system-server:0.1.24
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80


@@ -138,7 +138,7 @@ spec:
key: nats_password
name: vault-server-nats-secret
- name: NATS_SUBJECT_SYSTEM_VAULT
value: "terminus.{{ .Release.Namespace }}.system.vault"
value: "os.vault"
volumeMounts:
- name: vault-data
mountPath: /padloc/packages/server/data


@@ -83,14 +83,14 @@ data:
DCGM_FI_DEV_ROW_REMAP_FAILURE, gauge, Whether remapping of rows has failed
# DCP metrics
DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, Ratio of time the graphics engine is active.
# DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, Ratio of time the graphics engine is active.
# DCGM_FI_PROF_SM_ACTIVE, gauge, The ratio of cycles an SM has at least 1 warp assigned.
# DCGM_FI_PROF_SM_OCCUPANCY, gauge, The ratio of number of warps resident on an SM.
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, Ratio of cycles the tensor (HMMA) pipe is active.
DCGM_FI_PROF_DRAM_ACTIVE, gauge, Ratio of cycles the device memory interface is active sending or receiving data.
# DCGM_FI_PROF_PIPE_TENSOR_ACTIVE, gauge, Ratio of cycles the tensor (HMMA) pipe is active.
# DCGM_FI_PROF_DRAM_ACTIVE, gauge, Ratio of cycles the device memory interface is active sending or receiving data.
# DCGM_FI_PROF_PIPE_FP64_ACTIVE, gauge, Ratio of cycles the fp64 pipes are active.
# DCGM_FI_PROF_PIPE_FP32_ACTIVE, gauge, Ratio of cycles the fp32 pipes are active.
# DCGM_FI_PROF_PIPE_FP16_ACTIVE, gauge, Ratio of cycles the fp16 pipes are active.
DCGM_FI_PROF_PCIE_TX_BYTES, counter, The number of bytes of active pcie tx data including both header and payload.
DCGM_FI_PROF_PCIE_RX_BYTES, counter, The number of bytes of active pcie rx data including both header and payload.
# DCGM_FI_PROF_PCIE_TX_BYTES, counter, The number of bytes of active pcie tx data including both header and payload.
# DCGM_FI_PROF_PCIE_RX_BYTES, counter, The number of bytes of active pcie rx data including both header and payload.
{{- end }}


@@ -17,7 +17,7 @@ spec:
gpu.bytetrade.io/cuda-supported: 'true'
containers:
- name: gpu-scheduler
image: beclab/gpu-scheduler:v0.1.1
image: beclab/gpu-scheduler:v0.1.2
imagePullPolicy: IfNotPresent
ports:
- name: ws

Some files were not shown because too many files have changed in this diff.