Add official Sure Helm chart with HA Postgres/Redis support (#429)

* Add Helm chart for Sure Rails app deployment.

- Introduced initial Helm chart structure for deploying the Sure Rails app with Sidekiq on Kubernetes.
- Added optional CloudNativePG and Redis-Operator subcharts for high availability of PostgreSQL and Redis.
- Implemented configuration guards for mutual exclusivity between Redis operators.
- Included support for Horizontal Pod Autoscalers (HPAs) for web and worker deployments.
- Added default configurations for CronJobs, database migrations, and Ingress setup.
- Generated NOTES.txt for deployment guidance and troubleshooting.
- Added example profiles for simple and high-availability hosting setups in README.md.
- Enhanced templates with helper functions for reusable logic and secret management.

* Refactor Helm chart to use shared _env.tpl helper for environment variable injection.

- Added `_env.tpl` for managing environment variables across workloads (web, worker, jobs, etc.).
- Replaced repetitive inline environment configurations with reusable `sure.env` helper.
- Enhanced `redis-simple` configurations with support for dynamic persistence settings and resource limits.
- Updated `values.yaml` with improved defaults for multi-node cluster setups.
- Extended cleanup scripts to handle RedisSentinel CRs.

* Refactor Helm chart templates for consistency and improved readability

- Simplified `simplefin-backfill-job.yaml` by quoting backfill args for cleaner rendering.
- Removed unused `extraEnvFrom` logic from `_env.tpl`.
- Streamlined `redis-simple-deployment.yaml` by restructuring `volumeMounts` and `volumes` blocks for better condition handling.

* Bump Sure Helm chart version to 1.0.0 for initial stable release.

* Update README: Redis subchart to use OT redis-operator and improve secret management examples.

- Replaced `dandydev/redis-ha` with `OT-CONTAINER-KIT redis-operator`
- Added Redis secret configuration examples for flexible secret management.
- Updated README with new Redis configuration instructions, examples, and auto-wiring precedence adjustments.

* Enhance Redis-Operator Helm chart with managed scheduling, topology spreading, and fallback logic

- Introduced `managed.*` fields for optional RedisReplication configurations, prioritizing them over top-level settings.
- Added support for `nodeSelector`, `affinity`, `tolerations`, `topologySpreadConstraints`, and customized `workloadResources` for Redis pods.
- Updated default Redis image to `v8.4.0` in templates.
- Improved persistence configuration with fallback support.
- Updated README and values.yaml with examples and guidance for high-availability setups.
- Enhanced CNPG chart with scheduling options for consistency.

* Update README with improved Redis-Operator usage examples and secret placeholder guidance

- Added instructions for constructing `REDIS_URL` in Kubernetes manifests using placeholders.
- Replaced sensitive values in example secrets with non-sensitive placeholders (`__SET_SECRET__`).
- Included notes on linting Helm templates and YAML to avoid false-positive CI errors.

---------

Co-authored-by: Josh Waldrep <joshua.waldrep5+github@gmail.com>
Authored by LPW on 2025-12-13 11:52:35 -05:00; committed by GitHub.
Parent e044d240a1 · commit cd2b58fa30 · 26 changed files with 2203 additions and 0 deletions.

charts/sure/Chart.yaml (new file, 34 lines)
apiVersion: v2
name: sure
description: Official Helm chart for deploying the Sure Rails app (web + Sidekiq) on Kubernetes with optional HA PostgreSQL (CloudNativePG) and Redis.
type: application
version: 1.0.0
appVersion: "0.6.5"
kubeVersion: ">=1.25.0-0"
home: https://github.com/we-promise/sure
sources:
- https://github.com/we-promise/sure
keywords:
- rails
- sidekiq
- personal-finance
- cloudnativepg
- redis
maintainers:
- name: Sure maintainers
url: https://github.com/we-promise/sure
# Optional subcharts for turnkey self-hosting
# Users must `helm repo add` the below repositories before installing this chart.
dependencies:
- name: cloudnative-pg
version: "0.22.0"
repository: "https://cloudnative-pg.github.io/charts"
condition: cnpg.enabled
- name: redis-operator
alias: redisOperator
version: "~0.21.0"
repository: "https://ot-container-kit.github.io/helm-charts"
condition: redisOperator.enabled

charts/sure/README.md (new file, 679 lines)
# Sure Helm Chart
Official Helm chart for deploying the Sure Rails application on Kubernetes. It supports web (Rails) and worker (Sidekiq) workloads, optional in-cluster PostgreSQL (CloudNativePG) and Redis subcharts for turnkey self-hosting, and production-grade features such as hook-based migrations, pod security contexts, HPAs, and an optional ServiceMonitor.
## Features
- Web (Rails) Deployment + Service and optional Ingress
- Worker (Sidekiq) Deployment
- Optional Helm-hook Job for db:migrate, or initContainer migration strategy
- Optional post-install/upgrade SimpleFin encryption backfill Job (idempotent; dry-run by default)
- Optional CronJobs for custom tasks
- Optional subcharts
- CloudNativePG (operator) + Cluster CR for PostgreSQL with HA support
- OT-CONTAINER-KIT redis-operator for Redis HA (replication by default, optional Sentinel)
- Security best practices: runAsNonRoot, readOnlyRootFilesystem, optional existingSecret, no hardcoded secrets
- Scalability
- Replicas (web/worker), resources, topology spread constraints
- Optional HPAs for web/worker
- Affinity, nodeSelector, tolerations
## Requirements
- Kubernetes >= 1.25
- Helm >= 3.10
- For subcharts: add repositories first
```sh
helm repo add cloudnative-pg https://cloudnative-pg.github.io/charts
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
helm repo update
```
## Quickstart (turnkey self-hosting)
This installs CNPG operator + a Postgres cluster and Redis managed by the OT redis-operator (replication mode by default). It also creates an app Secret if you provide values under `rails.secret.values` (recommended for quickstart only; prefer an existing Secret or External Secrets in production).
Important: For production stability, use immutable image tags (for example, set `image.tag=v1.2.3`) instead of `latest`.
```sh
# Namespace
kubectl create ns sure || true
# Install chart (example: provide SECRET_KEY_BASE and pin an immutable image tag)
helm upgrade --install sure charts/sure \
-n sure \
--set image.tag=v1.2.3 \
--set rails.secret.enabled=true \
--set rails.secret.values.SECRET_KEY_BASE=$(openssl rand -hex 32)
```
Expose the app via an Ingress (see values) or `kubectl port-forward svc/sure 8080:80 -n sure`.
## Using external Postgres/Redis
Disable the bundled CNPG/Redis resources and set URLs explicitly.
```yaml
cnpg:
enabled: false
redisOperator:
managed:
enabled: false
redisSimple:
enabled: false
rails:
extraEnv:
DATABASE_URL: postgresql://user:pass@db.example.com:5432/sure
REDIS_URL: redis://:pass@redis.example.com:6379/0
```
## Installation profiles
### Deployment modes
| Mode | Description | Key values |
|------------------------------|-------------------------------------------|----------------------------------------------------------------------------|
| Simple single-node | All-in-one, minimal HA | `cnpg.cluster.instances=1`, `redisOperator.mode=replication` |
| HA self-hosted (replication) | CNPG + RedisReplication spread over nodes | `cnpg.cluster.instances=3`, `redisOperator.mode=replication` |
| HA self-hosted (Sentinel) | Replication + Sentinel failover layer | `redisOperator.mode=sentinel`, `redisOperator.sentinel.enabled=true` |
| External DB/Redis | Use managed Postgres/Redis | `cnpg.enabled=false`, `redisOperator.managed.enabled=false`, set URLs envs |
Below are example value stubs you can start from, depending on whether you want a simple single-node setup or a more HA-oriented k3s cluster.
### Simple single-node / low-resource profile
```yaml
image:
repository: ghcr.io/we-promise/sure
tag: "v1.0.0" # pin a specific version in production
pullPolicy: IfNotPresent
rails:
existingSecret: sure-secrets
encryptionEnv:
enabled: true
settings:
SELF_HOSTED: "true"
cnpg:
enabled: true
cluster:
enabled: true
name: sure-db
instances: 1
storage:
size: 8Gi
storageClassName: longhorn
redisOperator:
enabled: true
managed:
enabled: true
mode: replication
sentinel:
enabled: false
replicas: 3
persistence:
enabled: true
className: longhorn
size: 8Gi
migrations:
strategy: job
simplefin:
encryption:
enabled: false # enable + backfill later once you're happy
backfill:
enabled: true
dryRun: true
```
### HA k3s profile (example)
```yaml
cnpg:
enabled: true
cluster:
enabled: true
name: sure-db
instances: 3
storage:
size: 20Gi
storageClassName: longhorn
# Synchronous replication for stronger durability
minSyncReplicas: 1
maxSyncReplicas: 2
# Spread CNPG instances across nodes (adjust selectors for your cluster)
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
cnpg.io/cluster: sure-db
redisOperator:
enabled: true
managed:
enabled: true
mode: replication
sentinel:
enabled: false
replicas: 3
persistence:
enabled: true
className: longhorn
size: 8Gi
migrations:
strategy: job
initContainer:
enabled: true # optional safety net on pod restarts (only migrates when pending)
simplefin:
encryption:
enabled: true
backfill:
enabled: true
dryRun: false
```
## CloudNativePG notes
- The chart configures credentials via `spec.bootstrap.initdb.secret` rather than `managed.roles`. The operator expects the referenced Secret to contain `username` and `password` keys (configurable via values).
- This chart generates the application DB Secret when `cnpg.cluster.secret.enabled=true` using the keys defined at `cnpg.cluster.secret.usernameKey` (default `username`) and `cnpg.cluster.secret.passwordKey` (default `password`). If you use an existing Secret (`cnpg.cluster.existingSecret`), ensure it contains these keys. The Cluster CR references the Secret by name and maps the keys accordingly.
- If the CNPG operator is already installed cluster-wide, you may set `cnpg.enabled=false` and keep `cnpg.cluster.enabled=true`. The chart will still render the `Cluster` CR and compute the in-cluster `DATABASE_URL`.
Additional default hardening:
- `DATABASE_URL` includes `?sslmode=prefer`.
- Init migrations run `db:create || true` before `db:migrate` for first-boot convenience.
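For reference, a minimal bootstrap Secret matching the default key names looks like this (a sketch; the Secret name is illustrative, and the key names are configurable via `cnpg.cluster.secret.usernameKey`/`passwordKey`):

```yaml
# Illustrative only - generate real credentials out of band and never commit them.
apiVersion: v1
kind: Secret
metadata:
  name: sure-db-credentials   # reference via cnpg.cluster.existingSecret
type: Opaque
stringData:
  username: sure
  password: "__SET_SECRET__"
```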
## Redis URL and authentication
- When the OT redis-operator is used via this chart (see `redisOperator.managed.enabled=true`), `REDIS_URL` resolves to the operator's stable master service. In shell contexts, this can be expressed as:
- `redis://default:$(REDIS_PASSWORD)@<name>-redis-master.<namespace>.svc.cluster.local:6379/0` (where `<name>` defaults to `<fullname>-redis` but can be overridden via `redisOperator.name`)
For Kubernetes manifests, do not inline shell expansion. Either let this chart construct `REDIS_URL` for you automatically (recommended), or use a literal form with a placeholder password, e.g.:
- `redis://default:<password>@<name>-redis-master.<namespace>.svc.cluster.local:6379/0`
- The `default` username is required with Redis 6+ ACLs. If you explicitly set `REDIS_URL` under `rails.extraEnv`, your value takes precedence.
- The Redis password is taken from `sure.redisSecretName` (typically your app Secret, e.g. `sure-secrets`) using the key returned by `sure.redisPasswordKey` (default `redis-password`).
- If you prefer a simple (non-HA) in-cluster Redis, disable the operator-managed Redis (`redisOperator.managed.enabled=false`) and enable `redisSimple.enabled`. The chart will deploy a single Redis Pod + Service and wire `REDIS_URL` accordingly. Provide a password via `redisSimple.auth.existingSecret` (recommended) or rely on your app secret mapping.
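As a concrete sketch, the URL the chart computes can be assembled from its parts like this (names are illustrative defaults; the password is a non-sensitive placeholder):

```shell
# Assemble the operator-managed Redis URL (illustrative values only).
NAME="sure-redis"                 # redisOperator.name, defaults to <fullname>-redis
NAMESPACE="sure"
REDIS_PASSWORD="__SET_SECRET__"   # placeholder; sourced from the app Secret in-cluster
REDIS_URL="redis://default:${REDIS_PASSWORD}@${NAME}-redis-master.${NAMESPACE}.svc.cluster.local:6379/0"
echo "$REDIS_URL"
```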
### Using the OT redis-operator (Sentinel)
This chart can optionally install the OT-CONTAINER-KIT Redis Operator and/or render a `RedisSentinel` CR to manage Redis HA with Sentinel. This approach avoids templating pitfalls and provides stable failover.
Quickstart example (Sentinel, 3 replicas, Longhorn storage, reuse `sure-secrets` password):
```yaml
redisOperator:
enabled: true # install operator subchart (or leave false if already installed cluster-wide)
operator:
resources: # optional: keep the operator light on small k3s nodes
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 100m
memory: 256Mi
managed:
enabled: true # render a RedisSentinel CR
name: "" # defaults to <fullname>-redis
replicas: 3
auth:
existingSecret: sure-secrets
passwordKey: redis-password
persistence:
className: longhorn
size: 8Gi
```
Notes:
- The operator master service is `<name>-redis-master.<ns>.svc.cluster.local:6379`.
- The CR references your existing password secret via `kubernetesConfig.redisSecret { name, key }`.
- Provider precedence for auto-wiring is: explicit `rails.extraEnv.REDIS_URL` → `redisOperator.managed` → `redisSimple`.
- Only one in-cluster Redis provider should be enabled at a time to avoid ambiguity.
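The precedence above can be sketched as simple fallback logic (illustrative only; the chart implements this in its template helpers, and the variable names here are made up for the example):

```shell
# Mirror of the REDIS_URL auto-wiring precedence (example values).
explicit_url=""            # rails.extraEnv.REDIS_URL, if set
managed_enabled="true"     # redisOperator.managed.enabled
simple_enabled="false"     # redisSimple.enabled
if [ -n "$explicit_url" ]; then
  src="rails.extraEnv"
elif [ "$managed_enabled" = "true" ]; then
  src="redisOperator.managed"
elif [ "$simple_enabled" = "true" ]; then
  src="redisSimple"
else
  src="none"
fi
echo "$src"
```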
### HA scheduling and topology spreading
For resilient multi-node clusters, enforce one pod per node for critical components. Use `topologySpreadConstraints` with `maxSkew: 1` and `whenUnsatisfiable: DoNotSchedule`. Keep selectors precise to avoid matching other apps.
Examples:
```yaml
cnpg:
cluster:
instances: 3
minSyncReplicas: 1
maxSyncReplicas: 2
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
cnpg.io/cluster: sure-db
redisOperator:
managed:
enabled: true
replicas: 3
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app.kubernetes.io/instance: sure # verify labels on your cluster
```
Security note on label selectors:
- Choose selectors that uniquely match the intended pods to avoid cross-app interference. Good candidates are:
- CNPG: `cnpg.io/cluster: <cluster-name>` (CNPG labels its pods)
- RedisReplication: `app.kubernetes.io/instance: <release-name>` or `app.kubernetes.io/name: <cr-name>`
Compatibility:
- CloudNativePG v1.27.1 supports `minSyncReplicas`/`maxSyncReplicas` and standard k8s scheduling fields under `spec`.
- OT redis-operator v0.21.0 supports scheduling under `spec.kubernetesConfig`.
Testing and verification:
```bash
# Dry-run render with your values
helm template sure charts/sure -n sure -f ha-values.yaml --debug > rendered.yaml
# Install/upgrade in a test namespace
kubectl create ns sure-test || true
helm upgrade --install sure charts/sure -n sure-test -f ha-values.yaml --wait
# Verify CRs include your scheduling config
kubectl get cluster.postgresql.cnpg.io sure-db -n sure-test -o yaml \
| yq '.spec | {instances, minSyncReplicas, maxSyncReplicas, nodeSelector, affinity, tolerations, topologySpreadConstraints}'
# Default RedisReplication CR name is <fullname>-redis (e.g., sure-redis) unless overridden by redisOperator.name
kubectl get redisreplication sure-redis -n sure-test -o yaml \
| yq '.spec.kubernetesConfig | {nodeSelector, affinity, tolerations, topologySpreadConstraints}'
# After upgrade, trigger a gentle reschedule to apply spreads
# CNPG: delete one pod at a time or perform a switchover
kubectl delete pod -n sure-test -l cnpg.io/cluster=sure-db --wait=false --field-selector=status.phase=Running
# RedisReplication: delete one replica pod to let the operator recreate it under new constraints
kubectl delete pod -n sure-test -l app.kubernetes.io/component=redis --wait=false
# Confirm distribution across nodes
kubectl get pods -n sure-test -o wide
```
## Example app Secret (sure-secrets)
You will typically manage secrets via an external mechanism (External Secrets, Sealed Secrets, etc.), but for reference, below is an example `Secret` that provides the keys this chart expects by default:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: sure-secrets
type: Opaque
stringData:
# Rails secrets
SECRET_KEY_BASE: "__SET_SECRET__"
# Active Record Encryption keys (optional but recommended when using encryption features)
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: "__SET_SECRET__"
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY: "__SET_SECRET__"
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT: "__SET_SECRET__"
# Redis password used by operator-managed or simple Redis
redis-password: "__SET_SECRET__"
# Optional: CNPG bootstrap user/password if you are not letting the chart generate them
# username: "sure"
# password: "__SET_SECRET__"
```
Note: These are non-sensitive placeholder values. Do not commit real secrets to version control. Prefer External Secrets, Sealed Secrets, or your platform's secret manager to source these at runtime.
### Linting Helm templates and YAML
Helm template files under `charts/**/templates/**` contain template delimiters like `{{- ... }}` that raw YAML linters will flag as invalid. To avoid false positives in CI:
- Use Helm's linter for charts:
- `helm lint charts/sure`
- Configure your YAML linter (e.g., yamllint) to ignore Helm template directories (exclude `charts/**/templates/**`), or use a Helm-aware plugin that preprocesses templates before linting.
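For example, a yamllint configuration that skips Helm template directories might look like this (a sketch; adjust the glob to your repo layout):

```yaml
# .yamllint - exclude Helm templates, which are not valid raw YAML.
extends: default
ignore: |
  charts/**/templates/**
```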
You can then point the chart at the `sure-secrets` Secret shown above via:
```yaml
rails:
existingSecret: sure-secrets
redisOperator:
managed:
enabled: true
auth:
existingSecret: sure-secrets
passwordKey: redis-password
cnpg:
cluster:
existingSecret: sure-secrets # if you are reusing the same Secret for DB creds
secret:
enabled: false # do not generate a second Secret when using existingSecret
```
Environment variable ordering for dependent expansion:
- The chart declares `DB_PASSWORD` before `DATABASE_URL` and `REDIS_PASSWORD` before `REDIS_URL` in all workloads so that Kubernetes' `$(...)` dependent expansion resolves reliably.
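Rendered, that ordering looks roughly like this (a sketch; the Secret name, key, and service host are illustrative):

```yaml
# DB_PASSWORD must be declared before DATABASE_URL for $(DB_PASSWORD) to expand.
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sure-secrets
        key: password
  - name: DATABASE_URL
    value: "postgresql://sure:$(DB_PASSWORD)@sure-db-rw:5432/sure?sslmode=prefer"
```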
## Migrations
By default, this chart uses a **Helm hook Job** to prepare the database on **post-install/upgrade** using Rails' `db:prepare`, which will create the database (if needed) and apply migrations in one step. The Job waits for the database to be reachable via `pg_isready` before connecting.
Execution flow:
1. CNPG Cluster (if enabled) and other resources are created.
2. `sure-migrate` Job (post-install/post-upgrade hook) waits for the RW service to accept connections.
3. `db:prepare` runs; safe and idempotent across fresh installs and upgrades.
4. Optional data backfills (like SimpleFin encryption) run in their own post hooks.
To use the initContainer strategy instead (or in addition as a safety net):
```yaml
migrations:
strategy: initContainer
initContainer:
enabled: true
```
## SimpleFin encryption backfill
- SimpleFin encryption is optional. If you enable it, you must provide Active Record Encryption keys.
- The backfill Job runs a safe, idempotent Rake task to encrypt existing `access_url` values.
```yaml
simplefin:
encryption:
enabled: true
backfill:
enabled: true
dryRun: true # set false to actually write changes
rails:
# Provide encryption keys via an existing secret or below values (for testing only)
existingSecret: my-app-secret
# or
secret:
enabled: true
values:
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: "..."
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY: "..."
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT: "..."
```
## Ingress
```yaml
ingress:
enabled: true
className: "nginx"
hosts:
- host: finance.example.com
paths:
- path: /
pathType: Prefix
tls:
- hosts: [finance.example.com]
secretName: finance-tls
```
## Boot-required secrets (self-hosted)
In self-hosted mode the Rails initializer for Active Record Encryption loads on boot. To prevent boot crashes, ensure the following environment variables are present for ALL workloads (web, worker, migrate job/initContainer, CronJobs, and the SimpleFin backfill job):
- `SECRET_KEY_BASE`
- `ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY`
- `ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY`
- `ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT`
This chart wires these from your app Secret using `secretKeyRef`. Provide them via `rails.existingSecret` (recommended) or `rails.secret.values` (for testing only).
The injection of the three Active Record Encryption env vars can be toggled via:
```yaml
rails:
encryptionEnv:
enabled: true # set to false to skip injecting the three AR encryption env vars
```
Note: Even if `simplefin.encryption.enabled=false`, the app initializer expects these env vars to exist in self-hosted mode.
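A quick way to generate suitable values locally is with openssl (a sketch; the variable names are illustrative, and the results should go into your secret manager rather than version control):

```shell
# Generate boot-required secret material (random values; do not commit).
SECRET_KEY_BASE=$(openssl rand -hex 64)
AR_PRIMARY_KEY=$(openssl rand -hex 32)
AR_DETERMINISTIC_KEY=$(openssl rand -hex 32)
AR_KEY_DERIVATION_SALT=$(openssl rand -hex 32)
echo "SECRET_KEY_BASE length: ${#SECRET_KEY_BASE}"
```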
## Advanced environment variable injection
For simple string key/value envs, continue to use `rails.extraEnv` and the per-workload `web.extraEnv` / `worker.extraEnv` maps.
When you need `valueFrom` (e.g., Secret/ConfigMap references) or full EnvVar objects, use the new arrays:
```yaml
rails:
extraEnvVars:
- name: SOME_FROM_SECRET
valueFrom:
secretKeyRef:
name: my-secret
key: some-key
extraEnvFrom:
- secretRef:
name: another-secret
```
These are injected into web, worker, migrate job/initContainer, CronJobs, and the SimpleFin backfill job in addition to the simple maps.
## Writable filesystem and /tmp
Rails and Sidekiq may require writes to `/tmp` during boot. The chart now defaults to:
```yaml
securityContext:
readOnlyRootFilesystem: false
```
If you choose to enforce a read-only root filesystem, you can mount an ephemeral `/tmp` via:
```yaml
writableTmp:
enabled: true
```
This will add an `emptyDir` volume mounted at `/tmp` for the web and worker pods.
## Local images on k3s/k3d/kind (development workflow)
When using locally built images on single-node k3s/k3d/kind clusters:
- Consider forcing a never-pull policy during development:
```yaml
image:
pullPolicy: Never
```
- Load your local image into the cluster runtime:
- k3s (containerd):
```bash
# Export your image to a tar (e.g., from Docker or podman)
docker save ghcr.io/we-promise/sure:dev -o sure-dev.tar
# Import into each node's containerd
sudo ctr -n k8s.io images import sure-dev.tar
```
- k3d:
```bash
k3d image import ghcr.io/we-promise/sure:dev -c <your-cluster-name>
```
- kind:
```bash
kind load docker-image ghcr.io/we-promise/sure:dev --name <your-cluster-name>
```
- Multi-node clusters require loading the image into every node or pushing to a registry that all nodes can reach.
## HPAs
```yaml
hpa:
web:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
worker:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
```
## Security Notes
- Never commit secrets in `values.yaml`. Use `rails.existingSecret` or a tool like Sealed Secrets.
- The chart defaults to `runAsNonRoot`, `fsGroup=1000`, and drops all capabilities.
- For production, set resource requests/limits and enable HPAs.
## Values overview
Tip: For production stability, prefer immutable image tags. Set `image.tag` to a specific release (e.g., `v1.2.3`) rather than `latest`.
See `values.yaml` for the complete configuration surface, including:
- `image.*`: repository, tag, pullPolicy, imagePullSecrets
- `rails.*`: environment, extraEnv, existingSecret or secret.values, settings
- Also: `rails.extraEnvVars[]` (full EnvVar), `rails.extraEnvFrom[]` (EnvFromSource), and `rails.encryptionEnv.enabled` toggle
- `cnpg.*`: enable operator subchart and a Cluster resource, set instances, storage
- `redisOperator.*`: optionally install the OT redis-operator (`redisOperator.enabled`) and/or render managed Redis CRs (`redisOperator.managed.enabled`); configure `name`, `replicas`, `auth.existingSecret`/`passwordKey`, `persistence.className`/`size`, scheduling knobs, and `operator.resources` (controller) / `workloadResources` (Redis pods)
- `redisSimple.*`: optional single-pod Redis (non-HA) when `redisOperator.managed.enabled=false`
- `web.*`, `worker.*`: replicas, probes, resources, scheduling
- `migrations.*`: strategy job or initContainer
- `simplefin.encryption.*`: enable + backfill options
- `cronjobs.*`: custom CronJobs
- `service.*`, `ingress.*`, `serviceMonitor.*`, `hpa.*`
## Helm tests
After installation, you can run chart tests to verify:
- The web Service responds over HTTP.
- Redis auth works when an in-cluster provider is active.
```sh
helm test sure -n sure
```
The Redis auth test uses `redis-cli -u "$REDIS_URL" -a "$REDIS_PASSWORD" PING` and passes when `PONG` is returned.
Alternatively, you can smoke test from a running worker pod:
```sh
kubectl exec -n sure deploy/$(kubectl get deploy -n sure -o name | grep worker | cut -d/ -f2) -- \
sh -lc 'redis-cli -u "$REDIS_URL" -a "$REDIS_PASSWORD" PING'
```
## Testing locally (k3d/kind)
- Create a cluster (ensure a default StorageClass is available).
- Install chart with defaults (CNPG + Redis included).
- Wait for CNPG Cluster to become Ready, then for Rails web and worker pods to be Ready.
- Port-forward or configure Ingress.
```sh
helm template sure charts/sure -n sure --debug > rendered.yaml # dry-run inspection
helm upgrade --install sure charts/sure -n sure --create-namespace --wait
kubectl get pods -n sure
```
## Uninstall
```sh
helm uninstall sure -n sure
```
## Cleanup & reset (k3s)
For local k3s experimentation it's sometimes useful to completely reset the `sure` namespace, especially if CR finalizers or PVCs get stuck.
The script below is a **last-resort tool** for cleaning the namespace. It:
- Uninstalls the Helm release.
- Deletes RedisReplication and CNPG Cluster CRs in the namespace.
- Deletes PVCs.
- Optionally clears finalizers on remaining CRs/PVCs.
- Deletes the namespace.
> ⚠️ Finalizer patching can leave underlying volumes behind if your storage class uses its own finalizers (e.g. Longhorn snapshots). Use with care in production.
```bash
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE=${NAMESPACE:-sure}
RELEASE=${RELEASE:-sure}
echo "[sure-cleanup] Cleaning up Helm release '$RELEASE' in namespace '$NAMESPACE'..."
helm uninstall "$RELEASE" -n "$NAMESPACE" || echo "[sure-cleanup] Helm release not found or already removed."
# 1) Patch finalizers FIRST so deletes don't hang
if kubectl get redisreplication.redis.redis.opstreelabs.in -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Clearing finalizers from RedisReplication CRs..."
for rr in $(kubectl get redisreplication.redis.redis.opstreelabs.in -n "$NAMESPACE" -o name); do
kubectl patch "$rr" -n "$NAMESPACE" -p '{"metadata":{"finalizers":null}}' --type=merge || true
done
fi
if kubectl get redissentinels.redis.redis.opstreelabs.in -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Clearing finalizers from RedisSentinel CRs..."
for rs in $(kubectl get redissentinels.redis.redis.opstreelabs.in -n "$NAMESPACE" -o name); do
kubectl patch "$rs" -n "$NAMESPACE" -p '{"metadata":{"finalizers":null}}' --type=merge || true
done
fi
if kubectl get pvc -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Clearing finalizers from PVCs..."
for pvc in $(kubectl get pvc -n "$NAMESPACE" -o name); do
kubectl patch "$pvc" -n "$NAMESPACE" -p '{"metadata":{"finalizers":null}}' --type=merge || true
done
fi
# 2) Now delete CRs/PVCs without waiting
if kubectl get redisreplication.redis.redis.opstreelabs.in -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Deleting RedisReplication CRs (no wait)..."
kubectl delete redisreplication.redis.redis.opstreelabs.in -n "$NAMESPACE" --all --wait=false || true
fi
if kubectl get redissentinels.redis.redis.opstreelabs.in -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Deleting RedisSentinel CRs (no wait)..."
kubectl delete redissentinels.redis.redis.opstreelabs.in -n "$NAMESPACE" --all --wait=false || true
fi
if kubectl get cluster.postgresql.cnpg.io -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Deleting CNPG Cluster CRs (no wait)..."
kubectl delete cluster.postgresql.cnpg.io -n "$NAMESPACE" --all --wait=false || true
fi
if kubectl get pvc -n "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Deleting PVCs in namespace $NAMESPACE (no wait)..."
kubectl delete pvc -n "$NAMESPACE" --all --wait=false || true
fi
# 3) Delete namespace
if kubectl get ns "$NAMESPACE" >/dev/null 2>&1; then
echo "[sure-cleanup] Deleting namespace $NAMESPACE..."
kubectl delete ns "$NAMESPACE" --wait=false || true
else
echo "[sure-cleanup] Namespace $NAMESPACE already gone."
fi
echo "[sure-cleanup] Done."
```

(new template file, 60 lines)
{{- if .Values.redisOperator.managed.enabled }}
{{- $name := .Values.redisOperator.name | default (printf "%s-redis" (include "sure.fullname" .)) -}}
{{/* Prefer managed.* if provided; fallback to top-level */}}
{{- $imgRepo := (coalesce .Values.redisOperator.managed.image.repository .Values.redisOperator.image.repository) | default "quay.io/opstree/redis" -}}
{{- $imgTag := (coalesce .Values.redisOperator.managed.image.tag .Values.redisOperator.image.tag) | default "v8.4.0" -}}
{{- $replicas := (coalesce .Values.redisOperator.managed.replicas .Values.redisOperator.replicas) | default 3 -}}
{{- $nodeSelector := (coalesce .Values.redisOperator.managed.nodeSelector .Values.redisOperator.nodeSelector) -}}
{{- $tolerations := (coalesce .Values.redisOperator.managed.tolerations .Values.redisOperator.tolerations) -}}
{{- $affinity := (coalesce .Values.redisOperator.managed.affinity .Values.redisOperator.affinity) -}}
{{- $tsc := (coalesce .Values.redisOperator.managed.topologySpreadConstraints .Values.redisOperator.topologySpreadConstraints) -}}
{{- $workloadResources := (coalesce .Values.redisOperator.managed.workloadResources .Values.redisOperator.workloadResources) -}}
{{- $persistence := (coalesce .Values.redisOperator.managed.persistence .Values.redisOperator.persistence) -}}
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: {{ $name | quote }}
labels:
app.kubernetes.io/component: redis
{{- include "sure.labels" . | nindent 4 }}
spec:
clusterSize: {{ $replicas }}
kubernetesConfig:
image: {{ printf "%s:%s" $imgRepo $imgTag | quote }}
redisSecret:
name: {{ include "sure.redisSecretName" . }}
key: {{ include "sure.redisPasswordKey" . }}
{{- with $workloadResources }}
resources:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with $nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with $tolerations }}
tolerations:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with $affinity }}
affinity:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with $tsc }}
topologySpreadConstraints:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if and $persistence $persistence.enabled }}
storage:
volumeClaimTemplate:
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ $persistence.size | default "8Gi" }}
{{- if $persistence.className }}
storageClassName: {{ $persistence.className }}
{{- end }}
{{- end }}
{{- end }}

(new template file, 47 lines)
Thank you for installing the Sure Helm chart!

Release:   {{ .Release.Name }}
Namespace: {{ .Release.Namespace }}

Next steps
----------

1) Wait for dependencies (if enabled):
   - CloudNativePG operator/Cluster: wait until the Cluster reports Ready and the RW service is reachable.
   - Redis (operator-managed replication by default; optional Sentinel mode available):
     - Default (replication-only): watch the RedisReplication CR:
         kubectl get redisreplication.redis.redis.opstreelabs.in \
           {{ default (printf "%s-redis" (include "sure.fullname" .)) .Values.redisOperator.name }} -n {{ .Release.Namespace }} -w
     - If you enabled Sentinel mode (`redisOperator.mode=sentinel` and `redisOperator.sentinel.enabled=true`), also watch the RedisSentinel CR:
         kubectl get redissentinels.redis.redis.opstreelabs.in \
           {{ default (printf "%s-redis" (include "sure.fullname" .)) .Values.redisOperator.name }} -n {{ .Release.Namespace }} -w
     - Primary service (for the app):
         {{ default (printf "%s-redis" (include "sure.fullname" .)) .Values.redisOperator.name }}-redis-master.{{ .Release.Namespace }}.svc.cluster.local:6379

   You can watch pods with:
     kubectl get pods -n {{ .Release.Namespace }} -w -l app.kubernetes.io/instance={{ .Release.Name }}

2) Access the application:
   - If you enabled Ingress, navigate to the host you configured.
   - Otherwise, port-forward the Service:
       kubectl port-forward -n {{ .Release.Namespace }} svc/{{ include "sure.fullname" . }} 8080:80
     Then open: http://localhost:8080

3) Run database migrations (default behavior):
   - By default, a post-install/post-upgrade Helm hook Job runs the migrations. If you switched to the initContainer strategy, migrations run when the web pod starts.

4) Optional: run Helm tests to verify connectivity:
     helm test {{ .Release.Name }} -n {{ .Release.Namespace }}
   - Includes a Redis auth smoke test when an in-cluster Redis provider is enabled.

Troubleshooting
---------------
- If pods are not Ready, check logs:
    kubectl logs deploy/{{ include "sure.fullname" . }}-web -n {{ .Release.Namespace }}
    kubectl logs deploy/{{ include "sure.fullname" . }}-worker -n {{ .Release.Namespace }}
- For CloudNativePG, verify the RW service exists and the primary is Ready.
- For redis-operator, verify the RedisReplication CR (and the RedisSentinel CR, if Sentinel mode is enabled) reports Ready and that the master service resolves.

Security reminder
-----------------
- For production, prefer immutable image tags (for example, image.tag=v1.2.3) instead of 'latest'.
- Provide secrets via an existing Kubernetes Secret or a secret manager (External Secrets, Sealed Secrets).
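
The last point can be put into practice with a small values override. The key path (`rails.existingSecret`) comes from this chart's values file; the Secret name `my-sure-secrets` is a placeholder you would create yourself:

```yaml
# values-secrets.yaml -- hypothetical override; create the Secret beforehand, e.g.
#   kubectl create secret generic my-sure-secrets \
#     --from-literal=SECRET_KEY_BASE=... -n <namespace>
rails:
  existingSecret: my-sure-secrets   # when set, the chart does not create its own Secret
```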


@@ -0,0 +1,7 @@
{{/*
Mutual exclusivity and configuration guards
*/}}
{{- if and .Values.redisOperator.managed.enabled .Values.redisSimple.enabled -}}
{{- fail "Invalid configuration: Both redisOperator.managed.enabled and redisSimple.enabled are true. Enable only one in-cluster Redis provider." -}}
{{- end -}}
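
As a quick sanity check, a values file like the following (flag paths taken from the guard above) makes `helm template` abort with the message in the `fail` call:

```yaml
# invalid-values.yaml -- both in-cluster Redis providers enabled at once;
# rendering fails with "Invalid configuration: ..."
redisOperator:
  managed:
    enabled: true
redisSimple:
  enabled: true
```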


@@ -0,0 +1,88 @@
{{/*
Shared environment variable helpers for Rails workloads.
Usage example (indent with nindent in the caller):
  {{ include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" true "extraEnv" .Values.worker.extraEnv) | nindent 10 }}
The helper always injects:
  - RAILS_ENV
  - SECRET_KEY_BASE
  - optional Active Record Encryption keys (controlled by rails.encryptionEnv.enabled)
  - optional DATABASE_URL + DB_PASSWORD (includeDatabase=true and the helper can compute a DB URL)
  - optional REDIS_URL + REDIS_PASSWORD (includeRedis=true and the helper can compute a Redis URL)
  - rails.settings / rails.extraEnv / rails.extraEnvVars
  - optional additional per-workload env entries via extraEnv
*/}}
{{- define "sure.env" -}}
{{- $ctx := .ctx -}}
{{/* sprig's `default` treats false as empty, so honor an explicit false via hasKey/ternary */}}
{{- $includeDatabase := ternary .includeDatabase true (hasKey . "includeDatabase") -}}
{{- $includeRedis := ternary .includeRedis true (hasKey . "includeRedis") -}}
{{- $extraEnv := .extraEnv | default (dict) -}}
- name: RAILS_ENV
  value: {{ $ctx.Values.rails.env | quote }}
- name: SECRET_KEY_BASE
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.appSecretName" $ctx }}
      key: SECRET_KEY_BASE
{{- if $ctx.Values.rails.encryptionEnv.enabled }}
- name: ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.appSecretName" $ctx }}
      key: ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY
- name: ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.appSecretName" $ctx }}
      key: ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY
- name: ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.appSecretName" $ctx }}
      key: ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT
{{- end }}
{{- if $includeDatabase }}
{{- $dburl := include "sure.databaseUrl" $ctx -}}
{{- if $dburl }}
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.dbSecretName" $ctx }}
      key: {{ include "sure.dbPasswordKey" $ctx }}
- name: DATABASE_URL
  value: {{ $dburl | quote }}
{{- end }}
{{- end }}
{{- if $includeRedis }}
{{- $redis := include "sure.redisUrl" $ctx -}}
{{- if $redis }}
- name: REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ include "sure.redisSecretName" $ctx }}
      key: {{ include "sure.redisPasswordKey" $ctx }}
- name: REDIS_URL
  value: {{ $redis | quote }}
{{- end }}
{{- end }}
{{- range $k, $v := $ctx.Values.rails.settings }}
- name: {{ $k }}
  value: {{ $v | quote }}
{{- end }}
{{- range $k, $v := $ctx.Values.rails.extraEnv }}
- name: {{ $k }}
  value: {{ $v | quote }}
{{- end }}
{{- with $ctx.Values.rails.extraEnvVars }}
{{- toYaml . | nindent 0 }}
{{- end }}
{{- range $k, $v := $extraEnv }}
- name: {{ $k }}
  value: {{ $v | quote }}
{{- end }}
{{- end }}


@@ -0,0 +1,127 @@
{{/*
Common template helpers for the Sure chart
*/}}
{{- define "sure.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "sure.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "sure.labels" -}}
app.kubernetes.io/name: {{ include "sure.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{- define "sure.selectorLabels" -}}
app.kubernetes.io/name: {{ include "sure.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "sure.image" -}}
{{- printf "%s:%s" .Values.image.repository (default .Chart.AppVersion .Values.image.tag) -}}
{{- end -}}
{{- define "sure.serviceAccountName" -}}
{{- include "sure.fullname" . -}}
{{- end -}}
{{/* Compute Rails DATABASE_URL if CNPG cluster is enabled and no override provided */}}
{{- define "sure.databaseUrl" -}}
{{- $explicit := (index .Values.rails.extraEnv "DATABASE_URL") -}}
{{- if $explicit -}}
{{- $explicit -}}
{{- else -}}
{{- if .Values.cnpg.cluster.enabled -}}
{{- $cluster := .Values.cnpg.cluster.name | default (printf "%s-db" (include "sure.fullname" .)) -}}
{{- $user := .Values.cnpg.cluster.appUser | default "sure" -}}
{{- $db := .Values.cnpg.cluster.appDatabase | default "sure" -}}
{{- printf "postgresql://%s:$(DB_PASSWORD)@%s-rw.%s.svc.cluster.local:5432/%s?sslmode=prefer" $user $cluster .Release.Namespace $db -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/* Compute Redis URL if no explicit override provided */}}
{{- define "sure.redisUrl" -}}
{{- $explicit := (index .Values.rails.extraEnv "REDIS_URL") -}}
{{- if $explicit -}}
{{- $explicit -}}
{{- else -}}
{{- if .Values.redisOperator.managed.enabled -}}
{{- $name := .Values.redisOperator.name | default (printf "%s-redis" (include "sure.fullname" .)) -}}
{{- $host := printf "%s-master.%s.svc.cluster.local" $name .Release.Namespace -}}
{{- printf "redis://default:$(REDIS_PASSWORD)@%s:6379/0" $host -}}
{{- else if .Values.redisSimple.enabled -}}
{{- $host := printf "%s-redis.%s.svc.cluster.local" (include "sure.fullname" .) .Release.Namespace -}}
{{- /* cast to int: ports read from values are floats, which would break %d */ -}}
{{- printf "redis://default:$(REDIS_PASSWORD)@%s:%d/0" $host (int (.Values.redisSimple.service.port | default 6379)) -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/* Common secret name helpers to avoid complex inline conditionals in env blocks */}}
{{- define "sure.appSecretName" -}}
{{- .Values.rails.existingSecret | default (printf "%s-app" (include "sure.fullname" .)) -}}
{{- end -}}
{{- define "sure.dbSecretName" -}}
{{- if .Values.cnpg.cluster.enabled -}}
{{- if .Values.cnpg.cluster.existingSecret -}}
{{- .Values.cnpg.cluster.existingSecret -}}
{{- else -}}
{{- .Values.cnpg.cluster.secret.name | default (printf "%s-db-app" (include "sure.fullname" .)) -}}
{{- end -}}
{{- else -}}
{{- include "sure.appSecretName" . -}}
{{- end -}}
{{- end -}}
{{- define "sure.dbPasswordKey" -}}
{{- default "password" .Values.cnpg.cluster.secret.passwordKey -}}
{{- end -}}
{{- define "sure.redisSecretName" -}}
{{- if .Values.redisOperator.managed.enabled -}}
{{- if .Values.redisOperator.auth.existingSecret -}}
{{- .Values.redisOperator.auth.existingSecret -}}
{{- else -}}
{{- include "sure.appSecretName" . -}}
{{- end -}}
{{- else if and .Values.redisSimple.enabled .Values.redisSimple.auth.enabled -}}
{{- if .Values.redisSimple.auth.existingSecret -}}
{{- .Values.redisSimple.auth.existingSecret -}}
{{- else -}}
{{- include "sure.appSecretName" . -}}
{{- end -}}
{{- else -}}
{{- include "sure.appSecretName" . -}}
{{- end -}}
{{- end -}}
{{- define "sure.redisPasswordKey" -}}
{{- if .Values.redisOperator.managed.enabled -}}
{{- default "redis-password" .Values.redisOperator.auth.passwordKey -}}
{{- else if and .Values.redisSimple.enabled .Values.redisSimple.auth.enabled -}}
{{- default "redis-password" .Values.redisSimple.auth.passwordKey -}}
{{- else -}}
{{- default "redis-password" .Values.redis.passwordKey -}}
{{- end -}}
{{- end -}}
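
To make the URL precedence in `sure.redisUrl` concrete: with `rails.extraEnv.REDIS_URL` set, the helper returns that value verbatim and the in-cluster providers are ignored. A hypothetical override pointing at an external endpoint:

```yaml
# values-external-redis.yaml -- illustrative external Redis endpoint;
# $(REDIS_PASSWORD) is expanded by the kubelet from the env var of the same name
rails:
  extraEnv:
    REDIS_URL: redis://default:$(REDIS_PASSWORD)@redis.example.internal:6379/0
```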


@@ -0,0 +1,65 @@
{{- if .Values.cnpg.cluster.enabled }}
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: {{ default (printf "%s-db" (include "sure.fullname" .)) .Values.cnpg.cluster.name }}
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  instances: {{ .Values.cnpg.cluster.instances | default 1 }}
  {{- if .Values.cnpg.cluster.minSyncReplicas }}
  minSyncReplicas: {{ .Values.cnpg.cluster.minSyncReplicas }}
  {{- end }}
  {{- if .Values.cnpg.cluster.maxSyncReplicas }}
  maxSyncReplicas: {{ .Values.cnpg.cluster.maxSyncReplicas }}
  {{- end }}
  storage:
    size: {{ .Values.cnpg.cluster.storage.size | default "10Gi" }}
    {{- if .Values.cnpg.cluster.storage.storageClassName }}
    storageClass: {{ .Values.cnpg.cluster.storage.storageClassName }}
    {{- end }}
  {{- with .Values.cnpg.cluster.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.cnpg.cluster.affinity }}
  affinity:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.cnpg.cluster.tolerations }}
  tolerations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.cnpg.cluster.topologySpreadConstraints }}
  topologySpreadConstraints:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  bootstrap:
    initdb:
      database: {{ .Values.cnpg.cluster.appDatabase | default "sure" }}
      owner: {{ .Values.cnpg.cluster.appUser | default "sure" }}
      secret:
        name: {{ .Values.cnpg.cluster.existingSecret | default (default (printf "%s-db-app" (include "sure.fullname" .)) .Values.cnpg.cluster.secret.name) }}
      postInitSQL:
        - {{ printf "GRANT CONNECT ON DATABASE postgres TO \"%s\";" (.Values.cnpg.cluster.appUser | default "sure") | quote }}
        - {{ printf "ALTER ROLE \"%s\" CREATEDB;" (.Values.cnpg.cluster.appUser | default "sure") | quote }}
  {{- if not .Values.cnpg.cluster.existingSecret }}
  enableSuperuserAccess: false
  {{- end }}
  monitoring:
    enablePodMonitor: false
{{- end }}
---
{{- if and .Values.cnpg.cluster.enabled (and (not .Values.cnpg.cluster.existingSecret) .Values.cnpg.cluster.secret.enabled) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ default (printf "%s-db-app" (include "sure.fullname" .)) .Values.cnpg.cluster.secret.name }}
  labels:
    {{- include "sure.labels" . | nindent 4 }}
type: Opaque
stringData:
  {{ .Values.cnpg.cluster.secret.usernameKey | default "username" }}: {{ .Values.cnpg.cluster.appUser | default "sure" }}
  {{ .Values.cnpg.cluster.secret.passwordKey | default "password" }}: {{ randAlphaNum 24 | quote }}
{{- end }}
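
For the HA profile this chart targets, an override along these lines (keys match the template above; the replica counts are illustrative) turns the cluster into a three-instance synchronous setup:

```yaml
# values-ha-postgres.yaml -- illustrative HA settings for the CNPG Cluster
cnpg:
  cluster:
    enabled: true
    instances: 3
    minSyncReplicas: 1
    maxSyncReplicas: 2
    storage:
      size: 20Gi
```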


@@ -0,0 +1,70 @@
{{- if and .Values.cronjobs.enabled .Values.cronjobs.items }}
{{- range .Values.cronjobs.items }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "sure.fullname" $ }}-{{ .name }}
  labels:
    {{- include "sure.labels" $ | nindent 4 }}
spec:
  schedule: {{ .schedule | quote }}
  concurrencyPolicy: {{ default "Forbid" .concurrencyPolicy }}
  successfulJobsHistoryLimit: {{ default 1 .successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ default 3 .failedJobsHistoryLimit }}
  jobTemplate:
    spec:
      {{- if .ttlSecondsAfterFinished }}
      ttlSecondsAfterFinished: {{ .ttlSecondsAfterFinished }}
      {{- end }}
      template:
        metadata:
          labels:
            {{- include "sure.selectorLabels" $ | nindent 12 }}
        spec:
          restartPolicy: Never
          {{- with .nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .affinity }}
          affinity:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .tolerations }}
          tolerations:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          securityContext:
            {{- toYaml $.Values.podSecurityContext | nindent 12 }}
          {{- if $.Values.image.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml $.Values.image.imagePullSecrets | nindent 12 }}
          {{- end }}
          containers:
            - name: {{ .name }}
              image: {{ include "sure.image" $ }}
              imagePullPolicy: {{ $.Values.image.pullPolicy }}
              command:
                {{- toYaml .command | nindent 16 }}
              {{- if .args }}
              args:
                {{- toYaml .args | nindent 16 }}
              {{- end }}
              env:
                {{- include "sure.env" (dict "ctx" $ "includeDatabase" true "includeRedis" true "extraEnv" .env) | nindent 16 }}
              {{- if or $.Values.rails.extraEnvFrom .envFrom }}
              envFrom:
                {{- with $.Values.rails.extraEnvFrom }}
                {{- toYaml . | nindent 16 }}
                {{- end }}
                {{- with .envFrom }}
                {{- toYaml . | nindent 16 }}
                {{- end }}
              {{- end }}
              securityContext:
                {{- toYaml $.Values.securityContext | nindent 16 }}
              resources:
                {{- toYaml .resources | nindent 16 }}
---
{{- end }}
{{- end }}
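
A hypothetical `cronjobs.items` entry, using only fields the template above reads (the `sync:all` Rake task name is invented for illustration):

```yaml
# values-cron.yaml -- illustrative scheduled Rake task
cronjobs:
  enabled: true
  items:
    - name: sync-accounts
      schedule: "0 6 * * *"
      command: ["bundle", "exec"]
      args: ["rake", "sync:all"]   # hypothetical task name
      resources:
        requests:
          cpu: 50m
          memory: 256Mi
```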


@@ -0,0 +1,22 @@
{{- if .Values.hpa.web.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "sure.fullname" . }}-web
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "sure.fullname" . }}-web
  minReplicas: {{ .Values.hpa.web.minReplicas }}
  maxReplicas: {{ .Values.hpa.web.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.hpa.web.targetCPUUtilizationPercentage }}
{{- end }}


@@ -0,0 +1,22 @@
{{- if .Values.hpa.worker.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "sure.fullname" . }}-worker
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "sure.fullname" . }}-worker
  minReplicas: {{ .Values.hpa.worker.minReplicas }}
  maxReplicas: {{ .Values.hpa.worker.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.hpa.worker.targetCPUUtilizationPercentage }}
{{- end }}
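
Both HPAs read the same value shape; an illustrative override (the bounds are examples, not recommendations):

```yaml
# values-hpa.yaml -- illustrative autoscaling bounds for both Deployments
hpa:
  web:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    targetCPUUtilizationPercentage: 75
  worker:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    targetCPUUtilizationPercentage: 75
```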


@@ -0,0 +1,35 @@
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "sure.fullname" . }}
  labels:
    {{- include "sure.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "sure.fullname" $ }}
                port:
                  name: http
          {{- end }}
    {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- toYaml .Values.ingress.tls | nindent 4 }}
  {{- end }}
{{- end }}
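
An illustrative Ingress override for the structure above; the host, class name, and TLS secret name are placeholders:

```yaml
# values-ingress.yaml -- illustrative host configuration
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: sure.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts: [sure.example.com]
      secretName: sure-example-tls   # placeholder TLS secret
```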


@@ -0,0 +1,59 @@
{{- if eq .Values.migrations.strategy "job" }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "sure.fullname" . }}-migrate
  labels:
    {{- include "sure.labels" . | nindent 4 }}
  annotations:
    # Run migrations after all chart resources (including CNPG Cluster/Services) are created
    # so the DB can actually come up before we connect and run db:prepare.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: {{ .Values.migrations.job.backoffLimit | default 3 }}
  template:
    metadata:
      labels:
        {{- include "sure.selectorLabels" . | nindent 8 }}
    spec:
      restartPolicy: Never
      {{- with .Values.migrations.job.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.migrations.job.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.migrations.job.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.image.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.image.imagePullSecrets | nindent 8 }}
      {{- end }}
      containers:
        - name: migrate
          image: {{ include "sure.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            {{- toYaml .Values.migrations.job.command | nindent 12 }}
          args:
            - |
              {{- .Values.migrations.job.args | nindent 14 }}
          env:
            {{- include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" false) | nindent 12 }}
          {{- if .Values.rails.extraEnvFrom }}
          envFrom:
            {{- toYaml .Values.rails.extraEnvFrom | nindent 12 }}
          {{- end }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          resources:
            {{- toYaml .Values.migrations.job.resources | nindent 12 }}
  ttlSecondsAfterFinished: {{ .Values.migrations.job.ttlSecondsAfterFinished | default 600 }}
{{- end }}
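
Switching away from the hook Job is a one-line override; the `initContainer` value matches the check in the web Deployment template:

```yaml
# values-migrations.yaml -- run migrations in the web pod's initContainer
# instead of the post-install/post-upgrade hook Job
migrations:
  strategy: initContainer
```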


@@ -0,0 +1,26 @@
{{- if and .Values.redisOperator.enabled .Values.redisOperator.managed.enabled (eq (.Values.redisOperator.mode | default "sentinel") "sentinel") }}
{{- $name := .Values.redisOperator.name | default (printf "%s-redis" (include "sure.fullname" .)) -}}
{{- $imgRepo := .Values.redisOperator.sentinel.image.repository | default "quay.io/opstree/redis-sentinel" -}}
{{- $imgTag := .Values.redisOperator.sentinel.image.tag | default "v7.2.4" -}}
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisSentinel
metadata:
  name: {{ $name | quote }}
  labels:
    app.kubernetes.io/component: redis
    {{- include "sure.labels" . | nindent 4 }}
spec:
  clusterSize: {{ .Values.redisOperator.replicas | default 3 }}
  redisSentinelConfig:
    redisReplicationName: {{ $name | quote }}
    masterGroupName: {{ (.Values.redisOperator.sentinel.masterGroupName | default "mymaster") | quote }}
  kubernetesConfig:
    image: {{ printf "%s:%s" $imgRepo $imgTag | quote }}
    redisSecret:
      name: {{ include "sure.redisSecretName" . }}
      key: {{ include "sure.redisPasswordKey" . }}
    {{- with .Values.redisOperator.workloadResources }}
    resources:
      {{- toYaml . | nindent 6 }}
    {{- end }}
{{- end }}


@@ -0,0 +1,19 @@
{{- if and .Values.redisOperator.enabled .Values.redisOperator.managed.enabled (eq (.Values.redisOperator.mode | default "sentinel") "sentinel") }}
{{- $name := .Values.redisOperator.name | default (printf "%s-redis" (include "sure.fullname" .)) -}}
{{- $mg := .Values.redisOperator.sentinel.masterGroupName | default "mymaster" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-sentinel-config" $name }}
  labels:
    app.kubernetes.io/component: redis
    {{- include "sure.labels" . | nindent 4 }}
data:
  sentinel.conf: |
    port 26379
    daemonize no
    protected-mode no
    sentinel monitor {{ $mg }} {{ printf "%s-master" $name }} 6379 2
    sentinel down-after-milliseconds {{ $mg }} 10000
    sentinel failover-timeout {{ $mg }} 60000
    sentinel parallel-syncs {{ $mg }} 1
{{- end }}


@@ -0,0 +1,79 @@
{{- if .Values.redisSimple.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sure.fullname" . }}-redis
  labels:
    app.kubernetes.io/component: redis
    {{- include "sure.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: redis
      {{- include "sure.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/component: redis
        {{- include "sure.selectorLabels" . | nindent 8 }}
    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: redis
          image: {{ .Values.redisSimple.image.repository }}:{{ .Values.redisSimple.image.tag }}
          imagePullPolicy: {{ .Values.redisSimple.image.pullPolicy }}
          command: ["sh", "-c"]
          args:
            - |
              {{- if .Values.redisSimple.persistence.enabled }}
              exec redis-server --appendonly yes --dir /data{{ if .Values.redisSimple.auth.enabled }} --requirepass "$REDIS_PASSWORD"{{ end }}
              {{- else }}
              exec redis-server --appendonly no --save ''{{ if .Values.redisSimple.auth.enabled }} --requirepass "$REDIS_PASSWORD"{{ end }}
              {{- end }}
          {{- if .Values.redisSimple.auth.enabled }}
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ include "sure.redisSecretName" . }}
                  key: {{ include "sure.redisPasswordKey" . }}
          {{- end }}
          ports:
            - name: redis
              containerPort: {{ .Values.redisSimple.service.port | default 6379 }}
              protocol: TCP
          {{- with .Values.redisSimple.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.redisSimple.persistence.enabled }}
          volumeMounts:
            - name: data
              mountPath: /data
          {{- end }}
      {{- if .Values.redisSimple.persistence.enabled }}
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: {{ include "sure.fullname" . }}-redis
      {{- end }}
---
{{- if .Values.redisSimple.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "sure.fullname" . }}-redis
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: {{ .Values.redisSimple.persistence.size | default "1Gi" }}
  {{- if .Values.redisSimple.persistence.storageClass }}
  storageClassName: {{ .Values.redisSimple.persistence.storageClass }}
  {{- end }}
{{- end }}
{{- end }}
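
For the simple-hosting profile, an override enabling this single-node Redis with auth and a persistent volume (key paths as read by the template above; the size is illustrative):

```yaml
# values-redis-simple.yaml -- single-node Redis with persistence and auth
redisSimple:
  enabled: true      # mutually exclusive with redisOperator.managed.enabled
  auth:
    enabled: true
  persistence:
    enabled: true
    size: 2Gi
```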


@@ -0,0 +1,19 @@
{{- if .Values.redisSimple.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "sure.fullname" . }}-redis
  labels:
    app.kubernetes.io/component: redis
    {{- include "sure.labels" . | nindent 4 }}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/component: redis
    {{- include "sure.selectorLabels" . | nindent 4 }}
  ports:
    - name: redis
      port: {{ .Values.redisSimple.service.port | default 6379 }}
      targetPort: redis
      protocol: TCP
{{- end }}


@@ -0,0 +1,15 @@
{{- if and (not .Values.rails.existingSecret) .Values.rails.secret.enabled }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "sure.fullname" . }}-app
  labels:
    {{- include "sure.labels" . | nindent 4 }}
type: Opaque
data:
  {{- range $k, $v := .Values.rails.secret.values }}
  {{- if $v }}
  {{ $k }}: {{ $v | toString | b64enc }}
  {{- end }}
  {{- end }}
{{- end }}
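
The chart-managed Secret is meant for testing; the key names below come from the env helper, while the values are obvious placeholders:

```yaml
# values-dev-secrets.yaml -- for local testing only; prefer rails.existingSecret
# or an external secret manager in production
rails:
  secret:
    enabled: true
    values:
      SECRET_KEY_BASE: "change-me"
      ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: "change-me-too"
```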


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "sure.fullname" . }}
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app.kubernetes.io/component: web
    {{- include "sure.selectorLabels" . | nindent 4 }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP


@@ -0,0 +1,18 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "sure.fullname" . }}
  labels:
    {{- include "sure.labels" . | nindent 4 }}
    {{- with .Values.serviceMonitor.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  selector:
    matchLabels:
      {{- include "sure.selectorLabels" . | nindent 6 }}
  endpoints:
    - interval: {{ .Values.serviceMonitor.interval }}
      scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
      path: {{ .Values.serviceMonitor.path }}
      port: {{ .Values.serviceMonitor.portName }}
{{- end }}


@@ -0,0 +1,63 @@
{{- if and .Values.simplefin.encryption.enabled .Values.simplefin.encryption.backfill.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "sure.fullname" . }}-simplefin-backfill
  labels:
    {{- include "sure.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        {{- include "sure.selectorLabels" . | nindent 8 }}
    spec:
      restartPolicy: Never
      {{- with .Values.simplefin.encryption.backfill.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.simplefin.encryption.backfill.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.simplefin.encryption.backfill.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.image.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.image.imagePullSecrets | nindent 8 }}
      {{- end }}
      containers:
        - name: simplefin-backfill
          image: {{ include "sure.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            {{- toYaml .Values.simplefin.encryption.backfill.command | nindent 12 }}
          args:
            - {{ .Values.simplefin.encryption.backfill.args | quote }}
          env:
            {{- include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" true) | nindent 12 }}
            # Expose dry-run also via ENV for the Rake task's convenience
            - name: DRY_RUN
              value: {{ ternary "true" "false" .Values.simplefin.encryption.backfill.dryRun | quote }}
          {{- if .Values.rails.extraEnvFrom }}
          envFrom:
            {{- toYaml .Values.rails.extraEnvFrom | nindent 12 }}
          {{- end }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          resources:
            {{- toYaml .Values.simplefin.encryption.backfill.resources | nindent 12 }}
  ttlSecondsAfterFinished: {{ .Values.simplefin.encryption.backfill.ttlSecondsAfterFinished | default 600 }}
{{- end }}


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "sure.fullname" . }}-test-connection
  labels:
    {{- include "sure.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
    # Auto-clean both successful and failed test Pods to keep the namespace tidy
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: docker.io/library/busybox:1.36
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c"]
      args:
        - |
          echo "Testing HTTP endpoint on Service {{ include "sure.fullname" . }}";
          wget -qO- http://{{ include "sure.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}/ > /dev/null && echo "OK" || (echo "FAILED" && exit 1)


@@ -0,0 +1,32 @@
{{- $url := include "sure.redisUrl" . -}}
{{- if $url }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ include "sure.fullname" . }}-test-redis-auth
  labels:
    {{- include "sure.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test-success
    # Auto-clean both successful and failed test Pods; logs remain in `kubectl logs` history
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
spec:
  restartPolicy: Never
  containers:
    - name: redis-cli
      image: docker.io/redis:7.2-alpine
      imagePullPolicy: IfNotPresent
      command: ["sh", "-c"]
      args:
        - |
          echo "Pinging Redis at $REDIS_URL";
          # The URL already carries the password ($(REDIS_PASSWORD) is expanded
          # by the kubelet), so no separate -a flag is needed.
          redis-cli -u "$REDIS_URL" PING | grep -q PONG && echo OK || (echo FAILED && exit 1)
      env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{ include "sure.redisSecretName" . }}
              key: {{ include "sure.redisPasswordKey" . }}
        - name: REDIS_URL
          value: {{ $url | quote }}
{{- end }}


@@ -0,0 +1,110 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sure.fullname" . }}-web
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  revisionHistoryLimit: {{ .Values.web.revisionHistoryLimit | default 3 }}
  replicas: {{ .Values.web.replicas }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 25%
  selector:
    matchLabels:
      app.kubernetes.io/component: web
      {{- include "sure.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/component: web
        {{- include "sure.selectorLabels" . | nindent 8 }}
      annotations:
        {{- toYaml .Values.web.podAnnotations | nindent 8 }}
    spec:
      {{- if or (eq .Values.migrations.strategy "initContainer") .Values.migrations.initContainer.enabled }}
      initContainers:
        - name: db-migrate
          image: {{ include "sure.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            {{- toYaml .Values.migrations.initContainer.command | nindent 12 }}
          args:
            - |
              {{- .Values.migrations.initContainer.args | nindent 14 }}
          env:
            {{- include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" true) | nindent 12 }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          resources:
            {{- toYaml .Values.migrations.initContainer.resources | nindent 12 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.image.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.image.imagePullSecrets | nindent 8 }}
      {{- end }}
      containers:
        - name: web
          image: {{ include "sure.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.web.command }}
          command:
            {{- toYaml .Values.web.command | nindent 12 }}
          {{- end }}
          {{- if .Values.web.args }}
          args:
            {{- toYaml .Values.web.args | nindent 12 }}
          {{- end }}
          env:
            {{- include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" true) | nindent 12 }}
            {{- range $k, $v := .Values.web.extraEnv }}
            - name: {{ $k }}
              value: {{ $v | quote }}
            {{- end }}
          {{- if or .Values.rails.extraEnvFrom .Values.web.extraEnvFrom }}
          envFrom:
            {{- with .Values.rails.extraEnvFrom }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
            {{- with .Values.web.extraEnvFrom }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
          {{- end }}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          readinessProbe:
            {{- toYaml .Values.web.readinessProbe | nindent 12 }}
          livenessProbe:
            {{- toYaml .Values.web.livenessProbe | nindent 12 }}
          startupProbe:
            {{- toYaml .Values.web.startupProbe | nindent 12 }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          resources:
            {{- toYaml .Values.web.resources | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.web.extraVolumeMounts | nindent 12 }}
            {{- if .Values.writableTmp.enabled }}
            - name: tmp
              mountPath: /tmp
            {{- end }}
      volumes:
        {{- toYaml .Values.web.extraVolumes | nindent 8 }}
        {{- if .Values.writableTmp.enabled }}
        - name: tmp
          emptyDir: {}
        {{- end }}
      nodeSelector:
        {{- toYaml .Values.web.nodeSelector | nindent 8 }}
      affinity:
        {{- toYaml .Values.web.affinity | nindent 8 }}
      tolerations:
        {{- toYaml .Values.web.tolerations | nindent 8 }}
      topologySpreadConstraints:
        {{- toYaml .Values.web.topologySpreadConstraints | nindent 8 }}


@@ -0,0 +1,81 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sure.fullname" . }}-worker
  labels:
    {{- include "sure.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.worker.replicas }}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: worker
      {{- include "sure.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/component: worker
        {{- include "sure.selectorLabels" . | nindent 8 }}
      annotations:
        {{- toYaml .Values.worker.podAnnotations | nindent 8 }}
    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.image.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml .Values.image.imagePullSecrets | nindent 8 }}
      {{- end }}
      containers:
        - name: sidekiq
          image: {{ include "sure.image" . }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.worker.command }}
          command:
            {{- toYaml .Values.worker.command | nindent 12 }}
          {{- else }}
          command: ["bash", "-lc"]
          {{- end }}
          {{- if .Values.worker.args }}
          args:
            {{- toYaml .Values.worker.args | nindent 12 }}
          {{- else }}
          # Default: use Sidekiq's config/sidekiq.yml so all priority queues are polled
          args:
            - |
              bundle exec sidekiq -C config/sidekiq.yml
          {{- end }}
          env:
            {{- include "sure.env" (dict "ctx" . "includeDatabase" true "includeRedis" true "extraEnv" .Values.worker.extraEnv) | nindent 12 }}
          {{- if or .Values.rails.extraEnvFrom .Values.worker.extraEnvFrom }}
          envFrom:
            {{- with .Values.rails.extraEnvFrom }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
            {{- with .Values.worker.extraEnvFrom }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
          {{- end }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          resources:
            {{- toYaml .Values.worker.resources | nindent 12 }}
          volumeMounts:
            {{- toYaml .Values.worker.extraVolumeMounts | nindent 12 }}
            {{- if .Values.writableTmp.enabled }}
            - name: tmp
              mountPath: /tmp
            {{- end }}
      volumes:
        {{- toYaml .Values.worker.extraVolumes | nindent 8 }}
        {{- if .Values.writableTmp.enabled }}
        - name: tmp
          emptyDir: {}
        {{- end }}
      nodeSelector:
        {{- toYaml .Values.worker.nodeSelector | nindent 8 }}
      affinity:
        {{- toYaml .Values.worker.affinity | nindent 8 }}
      tolerations:
        {{- toYaml .Values.worker.tolerations | nindent 8 }}
      topologySpreadConstraints:
        {{- toYaml .Values.worker.topologySpreadConstraints | nindent 8 }}

charts/sure/values.yaml
# Default values for the Sure Helm chart.
# These defaults target a small multi-node self-hosted cluster (for example, k3s with 3 nodes)
# with in-cluster Postgres and Redis managed by operators. For true single-node setups or
# external DB/Redis, see the README "Installation profiles" section and provide an override
# values file.
nameOverride: ""
fullnameOverride: ""

image:
  repository: ghcr.io/we-promise/sure
  tag: "0.6.5"
  pullPolicy: IfNotPresent
  # Optional: imagePullSecrets to pull from private registries
  imagePullSecrets: []

# Global app configuration
rails:
  env: production
  # Extra environment variables (non-sensitive). Key/values become env vars.
  extraEnv: {}
  # Extra environment variables with full EnvVar objects (supports valueFrom/secretKeyRef)
  extraEnvVars: []
  # Extra envFrom sources applied to all workloads
  extraEnvFrom: []
  # Control whether encryption env vars are injected into workloads
  encryptionEnv:
    enabled: true
  # Use an existing Secret for sensitive values (recommended). If set, the chart will not create a Secret.
  existingSecret: ""
  # If not using existingSecret, define sensitive values here (for testing only; do not commit secrets!).
  # Prefer managing secrets via external tools like Sealed Secrets or External Secrets.
  secret:
    enabled: false
    values:
      SECRET_KEY_BASE: ""
      # Active Record encryption keys — required if simplefin.encryption.enabled=true
      ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: ""
      ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY: ""
      ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT: ""
      # Third party optional keys
      OPENAI_ACCESS_TOKEN: ""
      OPENAI_URI_BASE: ""
      OPENAI_MODEL: ""
      OIDC_CLIENT_ID: ""
      OIDC_CLIENT_SECRET: ""
      OIDC_ISSUER: ""
      LANGFUSE_PUBLIC_KEY: ""
      LANGFUSE_SECRET_KEY: ""
      LANGFUSE_HOST: "https://cloud.langfuse.com"
  # Non-secret env defaults mirrored from .env.local.example
  settings:
    SELF_HOSTED: "true"
    ONBOARDING_STATE: "open"
    AI_DEBUG_MODE: ""
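The sensitive values above can be generated locally before being placed in a Secret. A minimal sketch, assuming `openssl` is available and that 64-byte/32-byte hex keys are acceptable to your Rails version (the chart itself does not enforce key lengths):

```shell
# Generate candidate values for the chart's secret keys.
SECRET_KEY_BASE=$(openssl rand -hex 64)
AR_PRIMARY_KEY=$(openssl rand -hex 32)
AR_DETERMINISTIC_KEY=$(openssl rand -hex 32)
AR_SALT=$(openssl rand -hex 32)
# hex encoding doubles the byte count: 64 bytes -> 128 chars, 32 bytes -> 64 chars
echo "${#SECRET_KEY_BASE} ${#AR_PRIMARY_KEY}"
```

These can then be fed to something like `kubectl create secret generic sure-secrets --from-literal=SECRET_KEY_BASE="$SECRET_KEY_BASE" ...` (secret name is illustrative) and referenced via `rails.existingSecret`.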
# Database: CloudNativePG (operator chart dependency) and a Cluster CR (optional)
cnpg:
  enabled: true # enable installing the CloudNativePG operator via subchart
  cluster:
    enabled: true # create a CNPG Cluster custom resource for an in-cluster Postgres
    name: "sure-db"
    instances: 1 # set to 3+ for HA
    storage:
      size: 10Gi
      storageClassName: ""
    # auth config for application user
    appUser: sure
    appDatabase: sure
    # Secret name for DB credentials (auto-created if empty and secret.enabled=true)
    existingSecret: ""
    secret:
      enabled: true
      name: ""
      usernameKey: username
      passwordKey: password
    # Optional HA knobs
    minSyncReplicas: 0 # set >0 for synchronous replication
    maxSyncReplicas: 0
    # Optional scheduling for cluster Pods (examples for multi-node k3s; leave empty for single-node)
    nodeSelector: {}
    affinity: {}
    tolerations: []
    topologySpreadConstraints: []
    # Optional additional cluster configuration values
    parameters: {}
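Moving from these single-instance defaults to an HA Postgres only requires raising the relevant knobs in an override file. A sketch using the value names defined above (the storage class name is hypothetical; how these map onto the CNPG Cluster CR is defined by the chart's templates):

```yaml
# ha-values.yaml (example override)
cnpg:
  cluster:
    instances: 3
    minSyncReplicas: 1
    maxSyncReplicas: 1
    storage:
      size: 10Gi
      storageClassName: longhorn
```

Applied with, for example, `helm upgrade --install sure charts/sure -f ha-values.yaml`.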
# Redis Operator (OT-CONTAINER-KIT) optional dependency and managed Redis CR
redisOperator:
  enabled: true # install the operator subchart (standalone-ready defaults)
  # Pass-through to the operator subchart (controller) resources if supported by the chart
  # Many users run small k3s nodes; keep defaults empty and document examples in README
  operator:
    resources: {}
  name: "" # defaults to <fullname>-redis
  # Mode controls how this chart templates Redis CRs for the OT redis-operator.
  # - replication: only a RedisReplication CR (recommended/production)
  # - sentinel: RedisReplication remains, and an optional RedisSentinel CR can be enabled via sentinel.enabled (advanced)
  # - standalone: reserved for future use
  mode: replication
  sentinel:
    # When true AND mode=sentinel, chart will also render a RedisSentinel CR (advanced).
    # For most self-hosted production clusters, RedisReplication alone is sufficient; enable Sentinel
    # when you specifically want Redis Sentinel-based failover on top of replication.
    enabled: false
    masterGroupName: "mymaster"
    # Dedicated image for RedisSentinel pods. By default this is the OT-CONTAINER-KIT image
    # that understands SERVER_MODE/SETUP_MODE=SENTINEL and runs redis-sentinel on port 26379.
    image:
      repository: quay.io/opstree/redis-sentinel
      tag: v8.4.0
    replicas: 3
  # Image used by the RedisReplication CR (required by operator CRD)
  image:
    # Use OT-CONTAINER-KIT tuned image by default for best compatibility with the operator.
    repository: quay.io/opstree/redis
    tag: v8.4.0
  # Probes for RedisSentinel (sentinel TCP port)
  probes:
    sentinel:
      port: 26379
      initialDelaySeconds: 30
      periodSeconds: 10
    replication:
      port: 6379
      initialDelaySeconds: 30
      periodSeconds: 10
  auth:
    existingSecret: "" # default to rails.existingSecret when empty
    passwordKey: "redis-password"
  persistence:
    enabled: false
    className: "" # e.g. longhorn
    size: 8Gi
  # Optional scheduling for RedisReplication Pods (top-level fallback if managed.* not set). Prefer setting under managed.*
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  workloadResources: {} # resources for Redis pods created by the CR (data-plane, not operator controller)
  # Sure-managed Redis via Operator CR (works with or without installing the subchart if operator is cluster-wide)
  managed:
    enabled: true # default to in-cluster HA Redis via operator
    # Optional scheduling knobs for managed RedisReplication (preferred location)
    nodeSelector: {}
    tolerations: []
    affinity: {}
    topologySpreadConstraints: []
    workloadResources: {}
    persistence:
      enabled: false
      className: ""
      size: 8Gi
# Redis settings when connecting to an external Redis service (no operator-managed
# or chart-managed Redis); used only for secret key mapping
redis:
  # Name of the key inside rails.existingSecret (or rails.secret.values) that contains the Redis password
  passwordKey: "redis-password"
# Optional simple Redis (non-HA) that this chart can deploy as a fallback
redisSimple:
  enabled: false
  image:
    repository: docker.io/redis
    tag: "7.2"
    pullPolicy: IfNotPresent
  service:
    port: 6379
  auth:
    enabled: true
    # Use an existing Secret for the Redis password
    existingSecret: ""
    # Key within the Secret that contains the password (used by helpers/REDIS_PASSWORD)
    passwordKey: "redis-password"
  persistence:
    enabled: false
    storageClass: ""
    size: 1Gi
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
# URLs constructed automatically when using in-cluster DB and Redis.
# You can override DATABASE_URL and REDIS_URL explicitly via rails.extraEnv if using external services.
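A sketch of such an override for fully external services, using the value names defined in this file (hostnames and credentials are placeholders):

```yaml
# external-services-values.yaml (example override)
cnpg:
  enabled: false
  cluster:
    enabled: false
redisOperator:
  enabled: false
  managed:
    enabled: false
rails:
  extraEnv:
    DATABASE_URL: "postgres://sure:CHANGEME@db.example.internal:5432/sure"
    REDIS_URL: "redis://:CHANGEME@redis.example.internal:6379/0"
```

For real deployments, prefer sourcing the credentials from a Secret (via `rails.extraEnvVars` with `secretKeyRef`) rather than putting them in a values file.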
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: sure.local
      paths:
        - path: /
          pathType: Prefix
  tls: []
# ServiceMonitor for Prometheus Operator (optional). Set scrape path and port.
serviceMonitor:
  enabled: false
  interval: 30s
  scrapeTimeout: 10s
  path: /metrics
  portName: http
  additionalLabels: {}
# Web (Rails server) Deployment configuration
web:
  enabled: true
  replicas: 1
  revisionHistoryLimit: 3
  # Optional command/args override
  command: []
  args: []
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits: {}
  podAnnotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  extraEnv: {}
  extraEnvFrom: []
  extraVolumeMounts: []
  extraVolumes: []
  # Probes
  livenessProbe:
    httpGet:
      path: /
      port: http
    initialDelaySeconds: 20
    periodSeconds: 10
    timeoutSeconds: 2
    failureThreshold: 6
  readinessProbe:
    httpGet:
      path: /
      port: http
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 6
  startupProbe:
    httpGet:
      path: /
      port: http
    failureThreshold: 30
    periodSeconds: 5
# Worker (Sidekiq) Deployment configuration
worker:
  enabled: true
  replicas: 1
  queues: "default"
  # Optional command/args override for Sidekiq
  command: []
  args: []
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits: {}
  podAnnotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
  extraEnv: {}
  extraEnvFrom: []
  extraVolumeMounts: []
  extraVolumes: []
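By default the worker Deployment runs Sidekiq with `config/sidekiq.yml` so all configured queues are polled; to pin the worker to an explicit queue list instead, `worker.args` can be overridden. A sketch using Sidekiq's standard `-q` flag (queue names here are illustrative):

```yaml
worker:
  args:
    - |
      bundle exec sidekiq -q critical -q default -q low
```

Since the default `worker.command` is `["bash", "-lc"]`, the override is a single shell string rather than an argv list.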
# Migrations: how to run database migrations
migrations:
  # strategy: job (default) runs a Helm hook pre-install/pre-upgrade Job
  # strategy: initContainer runs migrations on web pod start via initContainer instead
  strategy: job
  job:
    backoffLimit: 3
    ttlSecondsAfterFinished: 600
    nodeSelector: {}
    tolerations: []
    affinity: {}
    resources: {}
    # Optional overrides for the migrate job
    command: ["bash", "-lc"]
    args: |
      DB_HOST=$(echo "$DATABASE_URL" | sed 's/.*@//; s/:.*//')
      until pg_isready -h "$DB_HOST" -p 5432; do echo "Waiting for DB..."; sleep 5; done
      echo "Preparing database (db:prepare)" && \
        DISABLE_DATABASE_ENVIRONMENT_CHECK=1 bundle exec rake db:prepare
  initContainer:
    # Optional additional safety net: when enabled, adds a db-migrate initContainer
    # to the web Deployment that only runs migrations if there are pending ones.
    # This can be used together with strategy: job for extra protection on pod restarts.
    enabled: false
    command: ["bash", "-lc"]
    args: |
      DB_HOST=$(echo "$DATABASE_URL" | sed 's/.*@//; s/:.*//')
      until pg_isready -h "$DB_HOST" -p 5432; do echo "Waiting for DB..."; sleep 5; done
      if bundle exec rake db:pending_migrations | grep -q "pending"; then
        echo "Running db:migrate" && \
          DISABLE_DATABASE_ENVIRONMENT_CHECK=1 bundle exec rake db:migrate
      else
        echo "No pending migrations"
      fi
    resources: {}
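The `args` scripts above derive the database host from `DATABASE_URL` with a two-step `sed`. A quick illustration with a made-up URL (note the extraction assumes the password itself contains no `@`):

```shell
DATABASE_URL="postgres://sure:s3cret@sure-db-rw:5432/sure"
# strip everything up to and including '@', then everything from the first ':'
DB_HOST=$(echo "$DATABASE_URL" | sed 's/.*@//; s/:.*//')
echo "$DB_HOST"
```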
# SimpleFin encryption backfill job (post-install/upgrade)
simplefin:
  encryption:
    enabled: false
    # If enabled, Active Record Encryption keys must be provided via rails.existingSecret or rails.secret.values
  backfill:
    enabled: true
    dryRun: true # default to dry-run for safety; set false to perform writes
    ttlSecondsAfterFinished: 600
    nodeSelector: {}
    tolerations: []
    affinity: {}
    # Optional overrides for the backfill job
    command: ["bash", "-lc"]
    args: |
      # Inline DB_PASSWORD into DATABASE_URL for this job only (DATABASE_URL comes from the chart)
      if [ -n "$DB_PASSWORD" ]; then
        export DATABASE_URL="${DATABASE_URL//\$(DB_PASSWORD)/$DB_PASSWORD}"
      fi
      DB_HOST=$(echo "$DATABASE_URL" | sed 's/.*@//; s/:.*//')
      until pg_isready -h "$DB_HOST" -p 5432; do echo "Waiting for DB..."; sleep 5; done
      echo "Running SimpleFin encrypt_access_urls backfill (dry_run=$DRY_RUN)" && \
        bundle exec rake "sure:simplefin:encrypt_access_urls[dry_run=$DRY_RUN]"
    resources: {}
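The backfill script's first step handles the case where the literal `$(DB_PASSWORD)` placeholder is left unexpanded inside `DATABASE_URL`, swapping in the real value with bash pattern substitution. An isolated sketch with hypothetical values:

```shell
DATABASE_URL='postgres://sure:$(DB_PASSWORD)@sure-db-rw:5432/sure'
DB_PASSWORD='s3cret'
# ${var//pattern/repl} is a bash-ism (not POSIX sh); the job runs under "bash -lc"
DATABASE_URL="${DATABASE_URL//\$(DB_PASSWORD)/$DB_PASSWORD}"
echo "$DATABASE_URL"
```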
# Optional CronJobs
cronjobs:
  enabled: false
  items: []
  # - name: nightly-backfill
  #   schedule: "15 2 * * *"
  #   command: ["bash", "-lc", "bundle exec rake some:task"]
  #   concurrencyPolicy: Forbid
  #   successfulJobsHistoryLimit: 1
  #   failedJobsHistoryLimit: 3
  #   resources: {}
# Security context defaults
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
  fsGroupChangePolicy: OnRootMismatch
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: false
  capabilities:
    drop:
      - ALL
# Optional writable /tmp for Rails/Sidekiq when enforcing read-only root FS
writableTmp:
  enabled: false
# HorizontalPodAutoscaler templates (disabled by default)
hpa:
  web:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 70
  worker:
    enabled: false
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
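Enabling the HPAs is a matter of flipping the flags above in an override file; note that CPU-based targets require a metrics API (e.g. metrics-server) in the cluster, and the corresponding `resources.requests.cpu` must be set for utilization math to work. An example override:

```yaml
hpa:
  web:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 70
```

When an HPA is enabled, the static `web.replicas` / `worker.replicas` values are superseded by the autoscaler.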