mirror of
https://github.com/koala73/worldmonitor.git
synced 2026-04-25 17:14:57 +02:00
feat: self-hosted Docker stack (#1521)
* feat: self-hosted Docker stack with nginx, Redis REST proxy, and seeders
Multi-stage Docker build: esbuild compiles the TS handlers, Vite builds the
frontend, and nginx + the Node.js API run under supervisord. Upstash-compatible
Redis REST proxy with a command allowlist for security. AIS relay WebSocket
sidecar. Seeder wrapper script that auto-sources env vars from
docker-compose.override.yml. Self-hosting guide with architecture diagram,
API key setup, and troubleshooting.
Security: Redis proxy command allowlist (blocks FLUSHALL/CONFIG/EVAL),
nginx security headers (X-Content-Type-Options, X-Frame-Options,
Referrer-Policy), non-root container user.
* feat(docker): add Docker secrets support for API keys
Entrypoint reads /run/secrets/* files and exports as env vars at
startup. Secrets take priority over environment block values and
stay out of docker inspect / process metadata.
Both methods (env vars and secrets) work simultaneously.
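The secrets bridge described above can be exercised standalone; a minimal sketch, using a scratch directory to stand in for `/run/secrets` (file name and value are illustrative):

```shell
# Simulate the entrypoint's secrets → env var bridge against a scratch dir.
secrets_dir=$(mktemp -d)
printf 'gsk_example' > "$secrets_dir/GROQ_API_KEY"

for secret_file in "$secrets_dir"/*; do
  [ -f "$secret_file" ] || continue
  key=$(basename "$secret_file")          # file name becomes the variable name
  value=$(tr -d '\n' < "$secret_file")    # strip any trailing newline
  export "$key=$value"
done

echo "$GROQ_API_KEY"   # prints gsk_example
```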
* fix(docker): point supervisord at templated nginx config
The entrypoint runs envsubst on nginx.conf.template and writes
the result to /tmp/nginx.conf (with LOCAL_API_PORT substituted
and listening on port 8080 for non-root). But supervisord was
still launching nginx with /etc/nginx/nginx.conf — the default
Alpine config that listens on port 80, which fails with
"Permission denied" under the non-root appuser.
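The fix amounts to pointing supervisord's nginx program at the templated config. A hypothetical sketch of the corrected stanza (the actual docker/supervisord.conf is not shown in this diff; program name and flags are assumptions):

```ini
; Sketch only — real docker/supervisord.conf is not part of this excerpt.
[program:nginx]
command=nginx -c /tmp/nginx.conf -g "daemon off;"
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```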
* fix(docker): remove KEYS from Redis allowlist, fix nginx header inheritance, add LLM vars to seeders
- Remove KEYS from redis-rest-proxy allowlist (O(N) blocking, Redis DoS risk)
- Move security headers into each nginx location block to prevent add_header
inheritance suppression
- Add LLM_API_URL / LLM_API_KEY / LLM_MODEL to run-seeders.sh grep filter
so LLM API keys set in docker-compose.override.yml are forwarded to seed scripts
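The forwarding described above can be sketched as a grep over the override file; a minimal standalone version (file contents, variable list, and exact patterns are assumptions, not the repo's actual run-seeders.sh):

```shell
# Sketch: pull quoted KEY: "value" pairs out of a compose override file
# and export them for host-run seeders. Sample file is illustrative.
override=$(mktemp)
cat > "$override" <<'EOF'
services:
  worldmonitor:
    environment:
      LLM_API_URL: "http://localhost:11434/v1/chat/completions"
      LLM_MODEL: "llama3"
EOF

for key in LLM_API_URL LLM_API_KEY LLM_MODEL; do
  value=$(grep -E "^[[:space:]]*${key}:" "$override" | head -n1 \
    | sed -E 's/[^:]*:[[:space:]]*"?([^"]*)"?/\1/')
  [ -n "$value" ] && export "$key=$value"
done

echo "$LLM_MODEL"   # prints llama3
```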
* fix(docker): add path-based POST to Redis proxy, expand allowlist, add missing seeder secrets
- Add POST /{command}/{args...} handler to redis-rest-proxy so Upstash-style
path POSTs work (setCachedJson uses POST /set/<key>/<value>/EX/<ttl>)
- Expand allowlist: HLEN, LTRIM (seed-military-bases, seed-forecasts),
ZREVRANGE (premium-stock-store), ZRANDMEMBER (seed-military-bases)
- Add ACLED_EMAIL, ACLED_PASSWORD, OPENROUTER_API_KEY, OLLAMA_API_URL,
OLLAMA_MODEL to run-seeders.sh so override keys reach host-run seeders
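The path-based POST handling described above boils down to splitting and percent-decoding the URL segments into a Redis command array; a hedged sketch (helper name is hypothetical, not the proxy's actual code):

```javascript
// Hypothetical helper: translate an Upstash-style path POST such as
// POST /set/<key>/<value>/EX/<ttl> into the argv array handed to Redis.
function pathToCommand(pathname) {
  return pathname
    .split('/')
    .filter(Boolean)           // drop the empty leading segment
    .map(decodeURIComponent);  // Upstash clients percent-encode each segment
}

console.log(pathToCommand('/set/mykey/%7B%22a%22%3A1%7D/EX/300'));
// -> [ 'set', 'mykey', '{"a":1}', 'EX', '300' ]
```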
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
19  .dockerignore  Normal file
@@ -0,0 +1,19 @@
node_modules
dist
.git
.github
.windsurf
.agent
.agents
.claude
.factory
.planning
e2e
src-tauri/target
src-tauri/sidecar/node
*.log
*.md
!README.md
docs/internal
docs/Docs_To_Review
tests
15  .env.example
@@ -7,6 +7,8 @@
#
#   cp .env.example .env.local
#
# For self-hosted Docker deployments, see SELF_HOSTING.md.
# Use docker-compose.override.yml (gitignored) for local secrets.
# ============================================


@@ -163,6 +165,19 @@ TELEGRAM_SESSION=
# Which curated list bucket to ingest: full | tech | finance
TELEGRAM_CHANNEL_SET=full

# ------ Self-Hosted LLM (Docker — any OpenAI-compatible endpoint) ------

# Point to your own LLM server (Ollama, vLLM, llama.cpp, etc.)
# Used for intelligence assessments in the correlation engine.
LLM_API_URL=
LLM_API_KEY=
LLM_MODEL=

# Alternative: Ollama-specific URL (used if LLM_API_URL is not set)
OLLAMA_API_URL=
OLLAMA_MODEL=


# ------ Railway Relay Connection (Vercel → Railway) ------

# Server-side URL (https://) — used by Vercel edge functions to reach the relay
3  .gitignore  vendored
@@ -53,6 +53,9 @@ scripts/data/iran-events-latest.json
scripts/rebuild-military-bases.mjs
.wrangler

# Build artifacts (generated by esbuild/tsc, not source code)
api/data/city-coords.js

# Runtime artifacts (generated by sidecar/tools, not source code)
api-cache.json
verbose-mode.json
72  Dockerfile  Normal file
@@ -0,0 +1,72 @@
# =============================================================================
# World Monitor — Docker Image
# =============================================================================
# Multi-stage build:
#   builder — installs deps, compiles TS handlers, builds Vite frontend
#   final   — nginx (static) + node (API) under supervisord
# =============================================================================

# ── Stage 1: Builder ─────────────────────────────────────────────────────────
FROM node:22-alpine AS builder

WORKDIR /app

# Install root dependencies (layer-cached until package.json changes)
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts

# Copy full source
COPY . .

# Compile TypeScript API handlers → self-contained ESM bundles
# Output is api/**/*.js alongside the source .ts files
RUN node docker/build-handlers.mjs

# Build Vite frontend (outputs to dist/)
# Skip blog build — blog-site has its own deps not installed here
RUN npx tsc && npx vite build

# ── Stage 2: Runtime ─────────────────────────────────────────────────────────
FROM node:22-alpine AS final

# nginx + supervisord
RUN apk add --no-cache nginx supervisor gettext && \
    mkdir -p /tmp/nginx-client-body /tmp/nginx-proxy /tmp/nginx-fastcgi \
        /tmp/nginx-uwsgi /tmp/nginx-scgi /var/log/supervisor && \
    addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app

# API server
COPY --from=builder /app/src-tauri/sidecar/local-api-server.mjs ./local-api-server.mjs
COPY --from=builder /app/src-tauri/sidecar/package.json ./package.json

# API handler modules (JS originals + compiled TS bundles)
COPY --from=builder /app/api ./api

# Static data files used by handlers at runtime
COPY --from=builder /app/data ./data

# Built frontend static files
COPY --from=builder /app/dist /usr/share/nginx/html

# Nginx + supervisord configs
COPY docker/nginx.conf /etc/nginx/nginx.conf.template
COPY docker/supervisord.conf /etc/supervisor/conf.d/worldmonitor.conf
COPY docker/entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

# Ensure writable dirs for non-root
RUN chown -R appuser:appgroup /app /tmp/nginx-client-body /tmp/nginx-proxy \
    /tmp/nginx-fastcgi /tmp/nginx-uwsgi /tmp/nginx-scgi /var/log/supervisor \
    /var/lib/nginx /var/log/nginx

USER appuser

EXPOSE 8080

# Healthcheck via nginx
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
    CMD wget -qO- http://localhost:8080/api/health || exit 1

CMD ["/app/entrypoint.sh"]
27  Dockerfile.relay  Normal file
@@ -0,0 +1,27 @@
# =============================================================================
# AIS Relay Sidecar
# =============================================================================
# Runs scripts/ais-relay.cjs as a standalone container.
# Only dependency beyond Node stdlib is the 'ws' WebSocket library.
# Set AISSTREAM_API_KEY in docker-compose.yml.
# =============================================================================

FROM node:22-alpine

WORKDIR /app

# Install only the ws package (everything else is Node stdlib)
RUN npm install --omit=dev ws@8.19.0

# Relay script
COPY scripts/ais-relay.cjs ./scripts/ais-relay.cjs

# Shared helper required by the relay (rss-allowed-domains.cjs)
COPY shared/ ./shared/

EXPOSE 3004

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD wget -qO- http://localhost:3004/health || exit 1

CMD ["node", "scripts/ais-relay.cjs"]
206  SELF_HOSTING.md  Normal file
@@ -0,0 +1,206 @@
# 🌍 Self-Hosting World Monitor

Run the full World Monitor stack locally with Docker/Podman.

## 📋 Prerequisites

- **Docker** or **Podman** (rootless works fine)
- **Docker Compose** or **podman-compose** (`pip install podman-compose` or `uvx podman-compose`)
- **Node.js 22+** (for running seed scripts on the host)

## 🚀 Quick Start

```bash
# 1. Clone and enter the repo
git clone https://github.com/koala73/worldmonitor.git
cd worldmonitor
npm install

# 2. Start the stack
docker compose up -d        # or: uvx podman-compose up -d

# 3. Seed data into Redis
./scripts/run-seeders.sh

# 4. Open the dashboard
open http://localhost:3000
```

The dashboard works out of the box with public data sources (earthquakes, weather, conflicts, etc.). API keys unlock additional data feeds.

## 🔑 API Keys

Create a `docker-compose.override.yml` to inject your keys. This file is **gitignored** — your secrets stay local.

```yaml
services:
  worldmonitor:
    environment:
      # 🤖 LLM — pick one or both (used for intelligence assessments)
      GROQ_API_KEY: ""          # https://console.groq.com (free, 14.4K req/day)
      OPENROUTER_API_KEY: ""    # https://openrouter.ai (free, 50 req/day)

      # 📊 Markets & Economics
      FINNHUB_API_KEY: ""       # https://finnhub.io (free tier)
      FRED_API_KEY: ""          # https://fred.stlouisfed.org/docs/api/api_key.html (free)
      EIA_API_KEY: ""           # https://www.eia.gov/opendata/ (free)

      # ⚔️ Conflict & Unrest
      ACLED_ACCESS_TOKEN: ""    # https://acleddata.com (free for researchers)

      # 🛰️ Earth Observation
      NASA_FIRMS_API_KEY: ""    # https://firms.modaps.eosdis.nasa.gov (free)

      # ✈️ Aviation
      AVIATIONSTACK_API: ""     # https://aviationstack.com (free tier)

      # 🚢 Maritime
      AISSTREAM_API_KEY: ""     # https://aisstream.io (free)

      # 🌐 Internet Outages (paid)
      CLOUDFLARE_API_TOKEN: ""  # https://dash.cloudflare.com (requires Radar access)

      # 🔌 Self-hosted LLM (optional — any OpenAI-compatible endpoint)
      LLM_API_URL: ""           # e.g. http://localhost:11434/v1/chat/completions
      LLM_API_KEY: ""
      LLM_MODEL: ""

  ais-relay:
    environment:
      AISSTREAM_API_KEY: ""     # same key as above — relay needs it too
```

### 💰 Free vs Paid

| Status | Keys |
|--------|------|
| 🟢 No key needed | Earthquakes, weather, natural events, UNHCR displacement, prediction markets, stablecoins, crypto, spending, climate anomalies, submarine cables, BIS data, cyber threats |
| 🟢 Free signup | GROQ, FRED, EIA, NASA FIRMS, AISSTREAM, Finnhub, AviationStack, ACLED, OpenRouter |
| 🟡 Free (limited) | OpenSky (higher rate limits with account) |
| 🔴 Paid | Cloudflare Radar (internet outages) |

## 🌱 Seeding Data

The seed scripts fetch upstream data and write it to Redis. They run **on the host** (not inside the container) and need the Redis REST proxy to be running.

```bash
# Run all seeders (auto-sources API keys from docker-compose.override.yml)
./scripts/run-seeders.sh
```

**⚠️ Important:** Redis data persists across container restarts via the `redis-data` volume, but is lost on `docker compose down -v`. Re-run the seeders if you remove volumes or see stale data.

To automate, add a cron job:

```bash
# Re-seed every 30 minutes
*/30 * * * * cd /path/to/worldmonitor && ./scripts/run-seeders.sh >> /tmp/wm-seeders.log 2>&1
```

### 🔧 Manual seeder invocation

If you prefer to run seeders individually:

```bash
export UPSTASH_REDIS_REST_URL=http://localhost:8079
export UPSTASH_REDIS_REST_TOKEN=wm-local-token
node scripts/seed-earthquakes.mjs
node scripts/seed-military-flights.mjs
# ... etc
```

## 🏗️ Architecture

```
┌─────────────────────────────────────────────┐
│                localhost:3000               │
│                   (nginx)                   │
├──────────────┬──────────────────────────────┤
│ Static Files │  /api/* proxy                │
│  (Vite SPA)  │        │                     │
│              │  Node.js API (:46123)        │
│              │  50+ route handlers          │
│              │        │                     │
│              │  Redis REST proxy (:8079)    │
│              │        │                     │
│              │  Redis (:6379)               │
└──────────────┴──────────────────────────────┘
        AIS Relay (WebSocket → AISStream)
```

| Container | Purpose | Port |
|-----------|---------|------|
| `worldmonitor` | nginx + Node.js API (supervisord) | 3000 → 8080 |
| `worldmonitor-redis` | Data store | 6379 (internal) |
| `worldmonitor-redis-rest` | Upstash-compatible REST proxy | 8079 |
| `worldmonitor-ais-relay` | Live vessel tracking WebSocket | 3004 (internal) |

## 🔨 Building from Source

```bash
# Frontend only (for development)
npx vite build

# Full Docker image
docker build -t worldmonitor:latest -f Dockerfile .

# Rebuild and restart
docker compose down && docker compose up -d
./scripts/run-seeders.sh
```

### ⚠️ Build Notes

- The Docker image uses **Node.js 22 Alpine** for both builder and runtime stages
- Blog site build is skipped in Docker (separate dependencies)
- The runtime stage needs `gettext` (Alpine package) for `envsubst` in the nginx config
- If you hit `npm ci` sync errors in Docker, regenerate the lockfile with the container's npm version:
  ```bash
  docker run --rm -v "$(pwd)":/app -w /app node:22-alpine npm install --package-lock-only
  ```

## 🌐 Connecting to External Infrastructure

### Shared Redis (optional)

If you run other stacks that share a Redis instance, connect via an external network:

```yaml
# docker-compose.override.yml
services:
  redis:
    networks:
      - infra_default

networks:
  infra_default:
    external: true
```

### Self-Hosted LLM

Any OpenAI-compatible endpoint works (Ollama, vLLM, llama.cpp server, etc.):

```yaml
# docker-compose.override.yml
services:
  worldmonitor:
    environment:
      LLM_API_URL: "http://your-host:8000/v1/chat/completions"
      LLM_API_KEY: "your-key"
      LLM_MODEL: "your-model-name"
    extra_hosts:
      - "your-host:192.168.1.100"   # if not DNS-resolvable
```

## 🐛 Troubleshooting

| Issue | Fix |
|-------|-----|
| 📡 `0/55 OK` on health check | Seeders haven't run — `./scripts/run-seeders.sh` |
| 🔴 nginx won't start | Check `podman logs worldmonitor` — likely missing `gettext` package |
| 🔑 Seeders say "Missing UPSTASH_REDIS_REST_URL" | Stack isn't running, or run via `./scripts/run-seeders.sh` (auto-sets env vars) |
| 📦 `npm ci` fails in Docker build | Lockfile mismatch — regenerate with `docker run --rm -v $(pwd):/app -w /app node:22-alpine npm install --package-lock-only` |
| 🚢 No vessel data | Set `AISSTREAM_API_KEY` in both `worldmonitor` and `ais-relay` services |
| 🔥 No wildfire data | Set `NASA_FIRMS_API_KEY` |
| 🌐 No outage data | Requires `CLOUDFLARE_API_TOKEN` (paid Radar access) |
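The Redis REST proxy in this stack returns Upstash-style `{ result }` / `{ error }` JSON envelopes. A minimal sketch of client-side unwrapping (helper name is hypothetical, not from the repo):

```javascript
// Hypothetical client helper: unwrap the { result } / { error } envelopes
// that the Upstash-compatible REST proxy returns.
function unwrap(payload) {
  if (payload && typeof payload === 'object' && 'error' in payload) {
    throw new Error(payload.error);
  }
  return payload.result;
}

console.log(unwrap({ result: 'PONG' })); // -> PONG
```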
112  docker-compose.yml  Normal file
@@ -0,0 +1,112 @@
# =============================================================================
# World Monitor — Docker / Podman Compose
# =============================================================================
# Self-contained stack: app + Redis + AIS relay.
#
# Quick start:
#   cp .env.example .env          # add your API keys
#   docker compose up -d --build
#
# The app will be available at http://localhost:3000
# =============================================================================

services:

  worldmonitor:
    build:
      context: .
      dockerfile: Dockerfile
    image: worldmonitor:latest
    container_name: worldmonitor
    ports:
      - "${WM_PORT:-3000}:8080"
    environment:
      UPSTASH_REDIS_REST_URL: "http://redis-rest:80"
      UPSTASH_REDIS_REST_TOKEN: "${REDIS_TOKEN:-wm-local-token}"
      LOCAL_API_PORT: "46123"
      LOCAL_API_MODE: "docker"
      LOCAL_API_CLOUD_FALLBACK: "false"
      WS_RELAY_URL: "http://ais-relay:3004"
      # LLM provider (any OpenAI-compatible endpoint)
      LLM_API_URL: "${LLM_API_URL:-}"
      LLM_API_KEY: "${LLM_API_KEY:-}"
      LLM_MODEL: "${LLM_MODEL:-}"
      GROQ_API_KEY: "${GROQ_API_KEY:-}"
      # Data source API keys (optional — features degrade gracefully)
      AISSTREAM_API_KEY: "${AISSTREAM_API_KEY:-}"
      FINNHUB_API_KEY: "${FINNHUB_API_KEY:-}"
      EIA_API_KEY: "${EIA_API_KEY:-}"
      FRED_API_KEY: "${FRED_API_KEY:-}"
      ACLED_ACCESS_TOKEN: "${ACLED_ACCESS_TOKEN:-}"
      NASA_FIRMS_API_KEY: "${NASA_FIRMS_API_KEY:-}"
      CLOUDFLARE_API_TOKEN: "${CLOUDFLARE_API_TOKEN:-}"
      AVIATIONSTACK_API: "${AVIATIONSTACK_API:-}"
    # Docker secrets (recommended for API keys — keeps them out of docker inspect).
    # Create secrets/ dir with one file per key, then uncomment below.
    # See SELF_HOSTING.md or docker-compose.override.yml for details.
    # secrets:
    #   - GROQ_API_KEY
    #   - AISSTREAM_API_KEY
    #   - FINNHUB_API_KEY
    #   - FRED_API_KEY
    #   - NASA_FIRMS_API_KEY
    #   - LLM_API_KEY
    depends_on:
      redis-rest:
        condition: service_started
      ais-relay:
        condition: service_started
    restart: unless-stopped

  ais-relay:
    build:
      context: .
      dockerfile: Dockerfile.relay
    image: worldmonitor-ais-relay:latest
    container_name: worldmonitor-ais-relay
    environment:
      AISSTREAM_API_KEY: "${AISSTREAM_API_KEY:-}"
      PORT: "3004"
    restart: unless-stopped

  redis:
    image: docker.io/redis:7-alpine
    container_name: worldmonitor-redis
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    restart: unless-stopped

  redis-rest:
    build:
      context: docker
      dockerfile: Dockerfile.redis-rest
    image: worldmonitor-redis-rest:latest
    container_name: worldmonitor-redis-rest
    ports:
      - "127.0.0.1:8079:80"
    environment:
      SRH_TOKEN: "${REDIS_TOKEN:-wm-local-token}"
      SRH_CONNECTION_STRING: "redis://redis:6379"
    depends_on:
      - redis
    restart: unless-stopped

# Docker secrets — uncomment and point to your secret files.
# Example: echo "gsk_abc123" > secrets/groq_api_key.txt
# secrets:
#   GROQ_API_KEY:
#     file: ./secrets/groq_api_key.txt
#   AISSTREAM_API_KEY:
#     file: ./secrets/aisstream_api_key.txt
#   FINNHUB_API_KEY:
#     file: ./secrets/finnhub_api_key.txt
#   FRED_API_KEY:
#     file: ./secrets/fred_api_key.txt
#   NASA_FIRMS_API_KEY:
#     file: ./secrets/nasa_firms_api_key.txt
#   LLM_API_KEY:
#     file: ./secrets/llm_api_key.txt

volumes:
  redis-data:
6  docker/Dockerfile.redis-rest  Normal file
@@ -0,0 +1,6 @@
FROM node:22-alpine
WORKDIR /app
RUN npm init -y && npm install redis@4
COPY redis-rest-proxy.mjs .
EXPOSE 80
CMD ["node", "redis-rest-proxy.mjs"]
107  docker/build-handlers.mjs  Normal file
@@ -0,0 +1,107 @@
/**
 * Compiles all API handlers into self-contained ESM bundles so the
 * local-api-server.mjs sidecar can discover and load them without node_modules.
 *
 * Two passes:
 *   1. TypeScript handlers (api/**\/*.ts) → bundled .js at same path
 *   2. Plain JS handlers (api/*.js root level) → bundled in-place to inline npm deps
 *
 * Run: node docker/build-handlers.mjs
 */

import { build } from 'esbuild';
import { readdir, stat } from 'node:fs/promises';
import { fileURLToPath } from 'node:url';
import path from 'node:path';

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const projectRoot = path.resolve(__dirname, '..');
const apiRoot = path.join(projectRoot, 'api');

// ── Pass 1: TypeScript handlers in subdirectories ─────────────────────────
async function findTsHandlers(dir) {
  const entries = await readdir(dir, { withFileTypes: true });
  const handlers = [];
  for (const entry of entries) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      handlers.push(...await findTsHandlers(fullPath));
    } else if (
      entry.name.endsWith('.ts') &&
      !entry.name.startsWith('_') &&
      !entry.name.endsWith('.test.ts') &&
      !entry.name.endsWith('.d.ts')
    ) {
      handlers.push(fullPath);
    }
  }
  return handlers;
}

// ── Pass 2: Plain JS handlers at api/ root level ──────────────────────────
// NOTE: This pass only re-bundles JS files at the api/ root level (not subdirs).
// If TS handlers are ever added at the api/ root (not under api/<domain>/v1/),
// they would need to be handled in Pass 1 instead.
async function findJsHandlers(dir) {
  const entries = await readdir(dir, { withFileTypes: true });
  return entries
    .filter(e =>
      e.isFile() &&
      e.name.endsWith('.js') &&
      !e.name.startsWith('_') &&
      !e.name.endsWith('.test.js') &&
      !e.name.endsWith('.test.mjs')
    )
    .map(e => path.join(dir, e.name));
}

async function compileHandlers(handlers, label) {
  if (handlers.length === 0) {
    console.log(`${label}: nothing to compile`);
    return 0;
  }
  console.log(`${label}: compiling ${handlers.length} handlers...`);

  const results = await Promise.allSettled(
    handlers.map(async (entryPoint) => {
      const outfile = entryPoint.replace(/\.ts$/, '.js');
      await build({
        entryPoints: [entryPoint],
        outfile,
        bundle: true,
        format: 'esm',
        platform: 'node',
        target: 'node20',
        treeShaking: true,
        allowOverwrite: true,
        loader: { '.ts': 'ts' },
      });
      const { size } = await stat(outfile);
      return { file: path.relative(projectRoot, outfile), size };
    })
  );

  let ok = 0, failed = 0;
  for (const result of results) {
    if (result.status === 'fulfilled') {
      const { file, size } = result.value;
      console.log(`  ✓ ${file} (${(size / 1024).toFixed(1)} KB)`);
      ok++;
    } else {
      console.error(`  ✗ ${result.reason?.message || result.reason}`);
      failed++;
    }
  }
  return failed;
}

const tsHandlers = await findTsHandlers(apiRoot);
const jsHandlers = await findJsHandlers(apiRoot);

const tsFailed = await compileHandlers(tsHandlers, 'build-handlers [TS]');
// JS handlers bundled AFTER TS so compiled .js outputs don't get re-processed
const jsFailed = await compileHandlers(jsHandlers, 'build-handlers [JS]');

const totalFailed = tsFailed + jsFailed;
console.log(`\nbuild-handlers: complete (${totalFailed} failures)`);
if (totalFailed > 0) process.exit(1);
18  docker/entrypoint.sh  Normal file
@@ -0,0 +1,18 @@
#!/bin/sh
set -e

# Docker secrets → env var bridge
# Reads /run/secrets/KEYNAME files and exports as env vars.
# Secrets take priority over env vars set via docker-compose environment block.
if [ -d /run/secrets ]; then
  for secret_file in /run/secrets/*; do
    [ -f "$secret_file" ] || continue
    key=$(basename "$secret_file")
    value=$(cat "$secret_file" | tr -d '\n')
    export "$key"="$value"
  done
fi

export LOCAL_API_PORT="${LOCAL_API_PORT:-46123}"
envsubst '$LOCAL_API_PORT' < /etc/nginx/nginx.conf.template > /tmp/nginx.conf
exec /usr/bin/supervisord -c /etc/supervisor/conf.d/worldmonitor.conf
103  docker/nginx.conf  Normal file
@@ -0,0 +1,103 @@
worker_processes auto;
error_log /dev/stderr warn;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - [$time_local] "$request" $status $body_bytes_sent';
    access_log /dev/stdout main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    # Serve pre-compressed assets (gzip .gz — built by vite brotliPrecompressPlugin)
    # brotli_static requires ngx_brotli module — not in Alpine nginx, use gzip fallback
    gzip_static on;
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 1024;
    gzip_vary on;
    gzip_types application/json application/javascript text/css text/plain application/xml text/xml image/svg+xml;

    # Temp dirs writable by non-root
    client_body_temp_path /tmp/nginx-client-body;
    proxy_temp_path /tmp/nginx-proxy;
    fastcgi_temp_path /tmp/nginx-fastcgi;
    uwsgi_temp_path /tmp/nginx-uwsgi;
    scgi_temp_path /tmp/nginx-scgi;

    server {
        listen 8080;
        root /usr/share/nginx/html;
        index index.html;

        # Static assets — immutable cache
        location /assets/ {
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header Referrer-Policy "strict-origin-when-cross-origin" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header Cache-Control "public, max-age=31536000, immutable";
            try_files $uri =404;
        }

        location /map-styles/ {
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header Referrer-Policy "strict-origin-when-cross-origin" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header Cache-Control "public, max-age=31536000, immutable";
            try_files $uri =404;
        }

        location /data/ {
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header Referrer-Policy "strict-origin-when-cross-origin" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header Cache-Control "public, max-age=31536000, immutable";
            try_files $uri =404;
        }

        location /textures/ {
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header Referrer-Policy "strict-origin-when-cross-origin" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header Cache-Control "public, max-age=31536000, immutable";
            try_files $uri =404;
        }

        # API proxy → Node.js local-api-server
        location /api/ {
            proxy_pass http://127.0.0.1:${LOCAL_API_PORT};
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # Pass Origin as localhost so api key checks pass for browser-origin requests
            proxy_set_header Origin http://localhost;
            proxy_read_timeout 120s;
            proxy_send_timeout 120s;
        }

        # SPA fallback — all other routes serve index.html
        location / {
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header Referrer-Policy "strict-origin-when-cross-origin" always;
            add_header X-XSS-Protection "1; mode=block" always;
            add_header Cache-Control "no-cache, no-store, must-revalidate";
            # Allow nested YouTube iframes to call requestStorageAccess().
            add_header Permissions-Policy "storage-access=(self \"https://www.youtube.com\" \"https://youtube.com\")";
            try_files $uri $uri/ /index.html;
        }
    }
}
193
docker/redis-rest-proxy.mjs
Normal file
193
docker/redis-rest-proxy.mjs
Normal file
@@ -0,0 +1,193 @@
|
||||
#!/usr/bin/env node
|
||||
/**
|
||||
* Upstash-compatible Redis REST proxy.
|
||||
* Translates REST URL paths to raw Redis commands via redis npm package.
|
||||
*
|
||||
* Supports:
|
||||
* GET /{command}/{arg1}/{arg2}/... → Redis command
|
||||
* POST / → JSON body ["COMMAND", "arg1", ...]
|
||||
* POST /pipeline → JSON body [["CMD1",...], ["CMD2",...]]
|
||||
* POST /multi-exec → JSON body [["CMD1",...], ["CMD2",...]]
|
||||
*
|
||||
* Env:
|
||||
* REDIS_URL - Redis connection string (default: redis://redis:6379)
|
||||
* SRH_TOKEN - Bearer token for auth (default: none)
|
||||
* PORT - Listen port (default: 80)
|
||||
*/
|
||||
|
||||
import http from 'node:http';
|
||||
import crypto from 'node:crypto';
|
||||
import { createClient } from 'redis';
|
||||
|
||||
const REDIS_URL = process.env.SRH_CONNECTION_STRING || process.env.REDIS_URL || 'redis://redis:6379';
|
||||
const TOKEN = process.env.SRH_TOKEN || '';
|
||||
const PORT = parseInt(process.env.PORT || '80', 10);
|
||||
|
||||
const client = createClient({ url: REDIS_URL });
|
||||
client.on('error', (err) => console.error('Redis error:', err.message));
|
||||
await client.connect();
|
||||
console.log(`Connected to Redis at ${REDIS_URL}`);
|
||||
|
||||
function checkAuth(req) {
|
||||
if (!TOKEN) return true;
|
||||
const auth = req.headers.authorization || '';
|
||||
const prefix = 'Bearer ';
|
||||
if (!auth.startsWith(prefix)) return false;
|
||||
const provided = auth.slice(prefix.length);
|
||||
if (provided.length !== TOKEN.length) return false;
|
||||
return crypto.timingSafeEqual(Buffer.from(provided), Buffer.from(TOKEN));
|
||||
}
|
||||
|
||||
// Command safety: allowlist of expected Redis commands.
// Blocks dangerous operations like FLUSHALL, CONFIG SET, EVAL, DEBUG, SLAVEOF.
const ALLOWED_COMMANDS = new Set([
  'GET', 'SET', 'DEL', 'MGET', 'MSET', 'SCAN',
  'TTL', 'EXPIRE', 'PEXPIRE', 'EXISTS', 'TYPE',
  'HGET', 'HSET', 'HDEL', 'HGETALL', 'HMGET', 'HMSET', 'HKEYS', 'HVALS', 'HEXISTS', 'HLEN',
  'LPUSH', 'RPUSH', 'LPOP', 'RPOP', 'LRANGE', 'LLEN', 'LTRIM',
  'SADD', 'SREM', 'SMEMBERS', 'SISMEMBER', 'SCARD',
  'ZADD', 'ZREM', 'ZRANGE', 'ZRANGEBYSCORE', 'ZREVRANGE', 'ZSCORE', 'ZCARD', 'ZRANDMEMBER',
  'GEOADD', 'GEOSEARCH', 'GEOPOS', 'GEODIST',
  'INCR', 'DECR', 'INCRBY', 'DECRBY',
  'PING', 'ECHO', 'INFO', 'DBSIZE',
  'PUBLISH', 'SUBSCRIBE',
  'SETNX', 'SETEX', 'PSETEX', 'GETSET',
  'APPEND', 'STRLEN',
]);

async function runCommand(args) {
  const cmd = args[0].toUpperCase();
  if (!ALLOWED_COMMANDS.has(cmd)) {
    throw new Error(`Command not allowed: ${cmd}`);
  }
  const cmdArgs = args.slice(1);
  return client.sendCommand([cmd, ...cmdArgs.map(String)]);
}

const MAX_BODY_BYTES = 1024 * 1024; // 1 MB

async function readBody(req) {
  const chunks = [];
  let totalLength = 0;
  for await (const chunk of req) {
    totalLength += chunk.length;
    if (totalLength > MAX_BODY_BYTES) {
      req.destroy();
      throw new Error('Request body too large');
    }
    chunks.push(chunk);
  }
  return Buffer.concat(chunks).toString();
}

const server = http.createServer(async (req, res) => {
  res.setHeader('content-type', 'application/json');

  if (!checkAuth(req)) {
    res.writeHead(401);
    res.end(JSON.stringify({ error: 'Unauthorized' }));
    return;
  }

  try {
    // POST / — single command
    if (req.method === 'POST' && (req.url === '/' || req.url === '')) {
      const body = JSON.parse(await readBody(req));
      const result = await runCommand(body);
      res.writeHead(200);
      res.end(JSON.stringify({ result }));
      return;
    }

    // POST /pipeline — batch commands
    if (req.method === 'POST' && req.url === '/pipeline') {
      const commands = JSON.parse(await readBody(req));
      const results = [];
      for (const cmd of commands) {
        try {
          const result = await runCommand(cmd);
          results.push({ result });
        } catch (err) {
          results.push({ error: err.message });
        }
      }
      res.writeHead(200);
      res.end(JSON.stringify(results));
      return;
    }

    // POST /multi-exec — transaction
    if (req.method === 'POST' && req.url === '/multi-exec') {
      const commands = JSON.parse(await readBody(req));
      const multi = client.multi();
      for (const cmd of commands) {
        const cmdName = cmd[0].toUpperCase();
        if (!ALLOWED_COMMANDS.has(cmdName)) {
          res.writeHead(403);
          res.end(JSON.stringify({ error: `Command not allowed: ${cmdName}` }));
          return;
        }
        multi.sendCommand(cmd.map(String));
      }
      const results = await multi.exec();
      res.writeHead(200);
      res.end(JSON.stringify(results.map((r) => ({ result: r }))));
      return;
    }

    // GET / — welcome
    if (req.method === 'GET' && (req.url === '/' || req.url === '')) {
      res.writeHead(200);
      res.end('"Welcome to Serverless Redis HTTP!"');
      return;
    }

    // GET /{command}/{args...} — REST style
    if (req.method === 'GET') {
      const pathname = new URL(req.url, 'http://localhost').pathname;
      const parts = pathname.slice(1).split('/').map(decodeURIComponent);
      if (parts.length === 0 || !parts[0]) {
        res.writeHead(400);
        res.end(JSON.stringify({ error: 'No command specified' }));
        return;
      }
      const result = await runCommand(parts);
      res.writeHead(200);
      res.end(JSON.stringify({ result }));
      return;
    }

    // POST /{command}/{args...} — Upstash-compatible path-based POST
    // Used by setCachedJson(): POST /set/<key>/<value>/EX/<ttl>
    if (req.method === 'POST') {
      const pathname = new URL(req.url, 'http://localhost').pathname;
      const parts = pathname.slice(1).split('/').map(decodeURIComponent);
      if (parts.length === 0 || !parts[0]) {
        res.writeHead(400);
        res.end(JSON.stringify({ error: 'No command specified' }));
        return;
      }
      const result = await runCommand(parts);
      res.writeHead(200);
      res.end(JSON.stringify({ result }));
      return;
    }

    // OPTIONS
    if (req.method === 'OPTIONS') {
      res.writeHead(204);
      res.end();
      return;
    }

    res.writeHead(404);
    res.end(JSON.stringify({ error: 'Not found' }));
  } catch (err) {
    res.writeHead(500);
    res.end(JSON.stringify({ error: err.message }));
  }
});

server.listen(PORT, '0.0.0.0', () => {
  console.log(`Redis REST proxy listening on 0.0.0.0:${PORT}`);
});
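The GET and POST path routes above accept Upstash-style `/{command}/{args...}` URLs, with each token URL-encoded before it is split back out by `decodeURIComponent`. A minimal sketch of how a client could build such a path (the helper name `buildCommandPath` is illustrative, not part of the proxy):

```javascript
// Build an Upstash-compatible REST path from a Redis command array.
// Each token is percent-encoded so keys or values containing '/', ':'
// or spaces survive the round trip through the URL.
function buildCommandPath(args) {
  return '/' + args.map((a) => encodeURIComponent(String(a))).join('/');
}

// e.g. the path a SET-with-TTL call would POST to the proxy:
console.log(buildCommandPath(['set', 'cache:news', '{"items":[]}', 'EX', 300]));
// → /set/cache%3Anews/%7B%22items%22%3A%5B%5D%7D/EX/300
```

Because the proxy decodes each path segment independently, a literal `/` inside a value stays inside its token rather than splitting the command.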
24
docker/supervisord.conf
Normal file
@@ -0,0 +1,24 @@
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/tmp/supervisord.pid

[program:nginx]
command=/usr/sbin/nginx -c /tmp/nginx.conf -g "daemon off;"
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:worldmonitor-api]
command=node /app/local-api-server.mjs
directory=/app
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
52
scripts/run-seeders.sh
Executable file
@@ -0,0 +1,52 @@
#!/bin/sh
# Run all seed scripts against the local Redis REST proxy.
# Usage: ./scripts/run-seeders.sh
#
# Requires the worldmonitor stack to be running (uvx podman-compose up -d).
# The Redis REST proxy listens on localhost:8079 by default.

UPSTASH_REDIS_REST_URL="${UPSTASH_REDIS_REST_URL:-http://localhost:8079}"
UPSTASH_REDIS_REST_TOKEN="${UPSTASH_REDIS_REST_TOKEN:-wm-local-token}"
export UPSTASH_REDIS_REST_URL UPSTASH_REDIS_REST_TOKEN

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"

# Source API keys from docker-compose.override.yml if present.
# These keys are configured for the container but seeders run on the host.
OVERRIDE="$PROJECT_DIR/docker-compose.override.yml"
if [ -f "$OVERRIDE" ]; then
  _env_tmp=$(mktemp)
  grep -E '^\s+[A-Z_]+:' "$OVERRIDE" \
    | grep -v '#' \
    | sed 's/^\s*//' \
    | sed 's/: */=/' \
    | sed "s/[\"']//g" \
    | grep -E '^(NASA_FIRMS|GROQ|AISSTREAM|FRED|FINNHUB|EIA|ACLED_ACCESS_TOKEN|ACLED_EMAIL|ACLED_PASSWORD|CLOUDFLARE|AVIATIONSTACK|OPENROUTER_API_KEY|LLM_API_URL|LLM_API_KEY|LLM_MODEL|OLLAMA_API_URL|OLLAMA_MODEL)' \
    | sed 's/^/export /' > "$_env_tmp"
  . "$_env_tmp"
  rm -f "$_env_tmp"
fi

ok=0 fail=0 skip=0

for f in "$SCRIPT_DIR"/seed-*.mjs; do
  name="$(basename "$f")"
  printf "→ %s ... " "$name"
  output=$(node "$f" 2>&1)
  rc=$?
  last=$(echo "$output" | tail -1)

  if echo "$last" | grep -qi "skip\|not set\|missing.*key\|not found"; then
    printf "SKIP (%s)\n" "$last"
    skip=$((skip + 1))
  elif [ $rc -eq 0 ]; then
    printf "OK\n"
    ok=$((ok + 1))
  else
    printf "FAIL (%s)\n" "$last"
    fail=$((fail + 1))
  fi
done

echo ""
echo "Done: $ok ok, $skip skipped, $fail failed"
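The grep/sed filter chain in the wrapper can be sanity-checked in isolation. A sketch feeding a toy override snippet through the same filters (GNU grep/sed assumed for the `\s` escape, as in the container image; the key names and values here are illustrative):

```shell
# Run a minimal docker-compose.override.yml-style snippet through the same
# filter chain run-seeders.sh uses, and capture the export lines it yields.
exports=$(printf '%s\n' \
  'services:' \
  '  worldmonitor:' \
  '    environment:' \
  '      GROQ_API_KEY: "abc123"' \
  "      FINNHUB_TOKEN: 'xyz'" \
  '      not_a_key: ignored' \
  | grep -E '^\s+[A-Z_]+:' \
  | grep -v '#' \
  | sed 's/^\s*//' \
  | sed 's/: */=/' \
  | sed "s/[\"']//g" \
  | grep -E '^(GROQ|FINNHUB)' \
  | sed 's/^/export /')
echo "$exports"
# → export GROQ_API_KEY=abc123
# → export FINNHUB_TOKEN=xyz
```

Note that `^\s+[A-Z_]+:` only matches indented keys whose names are entirely uppercase and underscores, so top-level YAML keys like `services:` and lowercase entries never reach the allowlist grep.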