* feat(den): add daytona-backed docker dev flow
* fix(den): allow multiple cloud workers in dev
* fix(den): use Daytona snapshots for sandbox runtime

  Use a prebuilt Daytona snapshot for the dev worker runtime so sandboxes start with openwork and opencode already installed. Pass the snapshot through the local Docker flow and add a helper to build the snapshot image for repeatable setup.
* chore(den): lower Daytona snapshot defaults

  Reduce the default snapshot footprint to 1 CPU and 2GB RAM so local Daytona worker testing fits smaller org limits more easily.
* Omar is comfortable

  Make Daytona-backed cloud workers stable enough to reconnect through a dedicated proxy instead of persisting expiring signed preview URLs. Split the proxy into its own deployable service, share Den schema access through a common package, and fix the web badge so healthy workers show ready.
* chore(den-db): add Drizzle package scripts

  Move the shared schema package toward owning its own migration workflow by adding generate and migrate commands plus a local Drizzle config.
* chore: update lockfile

  Refresh the workspace lockfile so the new den-db Drizzle tooling is captured in pnpm-lock.yaml.
* feat(den-worker-proxy): make Vercel deployment-ready

  Align the proxy service with Vercel's Hono runtime entry pattern and keep a separate Node server entry for Docker/local runs. Also scaffold the Vercel project/env setup and wire Render deploy sync to pass Daytona variables needed for daytona mode.
* feat(den-db): add db mode switch for PlanetScale

  Support DB_MODE=planetscale with Drizzle's PlanetScale serverless driver while keeping mysql2 as the local default. This lets Vercel-hosted services use HTTP database access without changing local development workflows.
* refactor(den-db): adopt shared TypeID ids

  Move the Den TypeID system into a shared utils package and use it across auth, org, worker, and sandbox records so fresh databases get one consistent internal ID format. Wire Better Auth into the same generator and update Den request boundaries to normalize typed ids cleanly.
* fix(den): restore docker dev stack after refactor

  Include the shared utils package in the Den Docker images, expose MySQL to the host for local inspection, and fix the remaining Den build/runtime issues surfaced by the Docker path after the shared package and TypeID changes.
* docs(den): document Daytona snapshot setup

  Add README guidance for building and publishing the prebuilt Daytona runtime snapshot, including the helper script, required env, and how to point Den at the snapshot for local Daytona mode.
* refactor(den-db): reset migrations and load env files

  Replace the old Den SQL migration history with a fresh baseline for the current schema, and let Drizzle commands load database credentials from env files. Default to mysql when DATABASE_URL is present and otherwise use PlanetScale credentials so local Docker and hosted environments can share the same DB package cleanly.
* fix(den): prepare manual PlanetScale deploys

  Update the Render workflow and Docker build path for the shared workspace packages, support PlanetScale credentials in the manual SQL migration runner, and stop auto-running DB migrations on Den startup so schema changes stay manual.
* feat(den-v2): add Daytona-first control plane

  Create a new den-v2 service from the current Daytona-enabled control plane, default it to Daytona provisioning, and add a dedicated Render deployment workflow targeting the new v2 Render service.
* feat(den-worker-proxy): redirect root to landing

  Send root proxy traffic to openworklabs.com so direct visits to the worker proxy domain do not hit worker-resolution errors.

---------

Co-authored-by: OmarMcAdam <gh@mcadam.io>
Den v2 Service
Control plane for hosted workers. Provides Better Auth, worker CRUD, and provisioning hooks.
Quick start
pnpm install
cp .env.example .env
pnpm dev
Docker dev stack
For a one-command local stack with MySQL + the Den cloud web app, run this from the repo root:
./packaging/docker/den-dev-up.sh
That brings up:
- local MySQL for Den
- the Den control plane on a randomized host port
- the OpenWork Cloud web app on a randomized host port
The script prints the exact URLs and the `docker compose ... down` command to use for cleanup.
Environment
- `DATABASE_URL`: MySQL connection URL
- `BETTER_AUTH_SECRET`: 32+ char secret
- `BETTER_AUTH_URL`: public base URL Better Auth uses for OAuth redirects and callbacks
- `DEN_BETTER_AUTH_TRUSTED_ORIGINS`: optional comma-separated trusted origins for Better Auth origin validation (defaults to `CORS_ORIGINS`)
- `GITHUB_CLIENT_ID`: optional OAuth app client ID for GitHub sign-in
- `GITHUB_CLIENT_SECRET`: optional OAuth app client secret for GitHub sign-in
- `GOOGLE_CLIENT_ID`: optional OAuth app client ID for Google sign-in
- `GOOGLE_CLIENT_SECRET`: optional OAuth app client secret for Google sign-in
- `PORT`: server port
- `CORS_ORIGINS`: comma-separated list of trusted browser origins (used for Better Auth origin validation + Express CORS)
- `PROVISIONER_MODE`: `stub`, `render`, or `daytona`
- `OPENWORK_DAYTONA_ENV_PATH`: optional path to a shared `.env.daytona` file; when unset, Den searches upwards from the repo for `.env.daytona`
- `WORKER_URL_TEMPLATE`: template string with `{workerId}`
- `RENDER_API_BASE`: Render API base URL (default `https://api.render.com/v1`)
- `RENDER_API_KEY`: Render API key (required for `PROVISIONER_MODE=render`)
- `RENDER_OWNER_ID`: Render workspace owner id (required for `PROVISIONER_MODE=render`)
- `RENDER_WORKER_REPO`: repository URL used to create worker services
- `RENDER_WORKER_BRANCH`: branch used for worker services
- `RENDER_WORKER_ROOT_DIR`: Render `rootDir` for worker services
- `RENDER_WORKER_PLAN`: Render plan for worker services
- `RENDER_WORKER_REGION`: Render region for worker services
- `RENDER_WORKER_OPENWORK_VERSION`: `openwork-orchestrator` npm version installed in workers; the worker build uses its `opencodeVersion` metadata to bundle a matching `opencode` binary into the Render deploy
- `RENDER_WORKER_NAME_PREFIX`: service name prefix
- `RENDER_WORKER_PUBLIC_DOMAIN_SUFFIX`: optional domain suffix for worker custom URLs (e.g. `openwork.studio` -> `<worker-id>.openwork.studio`)
- `RENDER_CUSTOM_DOMAIN_READY_TIMEOUT_MS`: max time to wait for vanity URL health before falling back to the Render URL
- `RENDER_PROVISION_TIMEOUT_MS`: max time to wait for the deploy to become live
- `RENDER_HEALTHCHECK_TIMEOUT_MS`: max time to wait for worker health checks
- `RENDER_POLL_INTERVAL_MS`: polling interval for deploy + health checks
- `VERCEL_API_BASE`: Vercel API base URL (default `https://api.vercel.com`)
- `VERCEL_TOKEN`: Vercel API token used to upsert worker DNS records
- `VERCEL_TEAM_ID`: optional Vercel team id for scoped API calls
- `VERCEL_TEAM_SLUG`: optional Vercel team slug for scoped API calls (used when `VERCEL_TEAM_ID` is unset)
- `VERCEL_DNS_DOMAIN`: Vercel-managed DNS zone used for worker records (default `openwork.studio`)
- `POLAR_FEATURE_GATE_ENABLED`: enable cloud-worker paywall (`true` or `false`)
- `POLAR_API_BASE`: Polar API base URL (default `https://api.polar.sh`)
- `POLAR_ACCESS_TOKEN`: Polar organization access token (required when paywall enabled)
- `POLAR_PRODUCT_ID`: Polar product ID used for checkout sessions (required when paywall enabled)
- `POLAR_BENEFIT_ID`: Polar benefit ID required to unlock cloud workers (required when paywall enabled)
- `POLAR_SUCCESS_URL`: redirect URL after successful checkout (required when paywall enabled)
- `POLAR_RETURN_URL`: return URL shown in checkout (required when paywall enabled)

Daytona:

- `DAYTONA_API_KEY`: API key used to create sandboxes and volumes
- `DAYTONA_API_URL`: Daytona API base URL (default `https://app.daytona.io/api`)
- `DAYTONA_TARGET`: optional Daytona region/target
- `DAYTONA_SNAPSHOT`: optional snapshot name; if omitted Den creates workers from `DAYTONA_SANDBOX_IMAGE`
- `DAYTONA_SANDBOX_IMAGE`: sandbox base image when no snapshot is provided (default `node:20-bookworm`)
- `DAYTONA_SANDBOX_CPU`, `DAYTONA_SANDBOX_MEMORY`, `DAYTONA_SANDBOX_DISK`: resource sizing when image-backed sandboxes are used
- `DAYTONA_SANDBOX_AUTO_STOP_INTERVAL`, `DAYTONA_SANDBOX_AUTO_ARCHIVE_INTERVAL`, `DAYTONA_SANDBOX_AUTO_DELETE_INTERVAL`: lifecycle controls
- `DAYTONA_SIGNED_PREVIEW_EXPIRES_SECONDS`: TTL for the signed OpenWork preview URL returned to Den clients (Daytona currently caps this at 24 hours)
- `DAYTONA_SANDBOX_NAME_PREFIX`, `DAYTONA_VOLUME_NAME_PREFIX`: resource naming prefixes
- `DAYTONA_WORKSPACE_MOUNT_PATH`, `DAYTONA_DATA_MOUNT_PATH`: volume mount paths inside the sandbox
- `DAYTONA_RUNTIME_WORKSPACE_PATH`, `DAYTONA_RUNTIME_DATA_PATH`, `DAYTONA_SIDECAR_DIR`: local sandbox paths used for the live OpenWork runtime; the mounted Daytona volumes are linked into the runtime workspace under `volumes/`
- `DAYTONA_OPENWORK_PORT`, `DAYTONA_OPENCODE_PORT`: ports used when launching `openwork serve`
- `DAYTONA_OPENWORK_VERSION`: optional npm version to install instead of the latest `openwork-orchestrator`
- `DAYTONA_CREATE_TIMEOUT_SECONDS`, `DAYTONA_DELETE_TIMEOUT_SECONDS`, `DAYTONA_HEALTHCHECK_TIMEOUT_MS`, `DAYTONA_POLL_INTERVAL_MS`: provisioning timeouts
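For a minimal local run with the stub provisioner, a starting `.env` might look like the following. Every value here is an illustrative placeholder, not a default shipped by this repo:

```shell
# Minimal local .env sketch: stub provisioning, local MySQL.
# All values below are placeholders; substitute your own.
DATABASE_URL=mysql://den:den@127.0.0.1:3306/den
BETTER_AUTH_SECRET=replace-with-a-random-string-of-32-plus-chars
BETTER_AUTH_URL=http://localhost:3000
CORS_ORIGINS=http://localhost:3000
PORT=3000
PROVISIONER_MODE=stub
```

Switching `PROVISIONER_MODE` to `render` or `daytona` requires the corresponding provider variables from the list above.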
For local Daytona development, place your Daytona API credentials in /_repos/openwork/.env.daytona and Den will pick them up automatically, including from task worktrees.
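The `WORKER_URL_TEMPLATE` value above carries a `{workerId}` placeholder. As a rough sketch of the substitution (Den performs the real expansion internally; this helper is purely illustrative):

```shell
# Illustrative only: replaces the {workerId} placeholder in a
# WORKER_URL_TEMPLATE-style string with a concrete worker id.
expand_worker_url() {
  template="$1"
  worker_id="$2"
  # sed treats { and } literally in a basic regular expression
  printf '%s\n' "$template" | sed "s/{workerId}/$worker_id/"
}

expand_worker_url 'https://{workerId}.openwork.studio' wrk-abc123
# prints https://wrk-abc123.openwork.studio
```

Note this simple sed form assumes the worker id contains no `/` or `&` characters.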
Building a Daytona snapshot
If you want Daytona workers to start from a prebuilt runtime instead of a generic base image, create a snapshot and point Den at it.
The snapshot builder for this repo lives at:
- `scripts/create-daytona-openwork-snapshot.sh`
- `services/den-worker-runtime/Dockerfile.daytona-snapshot`
It builds a Linux image with:
- `openwork-orchestrator`
- `opencode`
Prerequisites:
- Docker running locally
- Daytona CLI installed and logged in
- a valid `.env.daytona` with at least `DAYTONA_API_KEY`
From the OpenWork repo root:
./scripts/create-daytona-openwork-snapshot.sh
To publish a custom-named snapshot:
./scripts/create-daytona-openwork-snapshot.sh openwork-runtime
Useful optional overrides:
- `DAYTONA_SNAPSHOT_NAME`
- `DAYTONA_SNAPSHOT_REGION`
- `DAYTONA_SNAPSHOT_CPU`
- `DAYTONA_SNAPSHOT_MEMORY`
- `DAYTONA_SNAPSHOT_DISK`
- `OPENWORK_ORCHESTRATOR_VERSION`
- `OPENCODE_VERSION`
After the snapshot is pushed, set it in .env.daytona:
DAYTONA_SNAPSHOT=openwork-runtime
Then start Den in Daytona mode:
DEN_PROVISIONER_MODE=daytona packaging/docker/den-dev-up.sh
If you do not set DAYTONA_SNAPSHOT, Den falls back to DAYTONA_SANDBOX_IMAGE and installs runtime dependencies at sandbox startup.
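That snapshot-or-image choice can be pictured as a small selection rule. This is a sketch of the documented fallback, not Den's actual code:

```shell
# Sketch: prefer a prebuilt snapshot when DAYTONA_SNAPSHOT is set,
# otherwise fall back to the base sandbox image (default node:20-bookworm),
# mirroring the fallback described above. Illustrative only.
resolve_daytona_runtime() {
  if [ -n "${DAYTONA_SNAPSHOT:-}" ]; then
    echo "snapshot:${DAYTONA_SNAPSHOT}"
  else
    echo "image:${DAYTONA_SANDBOX_IMAGE:-node:20-bookworm}"
  fi
}

DAYTONA_SNAPSHOT=openwork-runtime resolve_daytona_runtime
# prints snapshot:openwork-runtime
```

With the snapshot path, runtime dependencies are already baked in; with the image path, they are installed at sandbox startup, so first boot is slower.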
Auth setup (Better Auth)
Generate Better Auth schema (Drizzle):
npx @better-auth/cli@latest generate --config src/auth.ts --output src/db/better-auth.schema.ts --yes
Apply migrations:
pnpm db:generate
pnpm db:migrate
# or use the SQL migration runner used by Docker
pnpm db:migrate:sql
API
- `GET /health`
- `GET /demo`: web app (sign-up + auth + worker launch)
- `GET /v1/me`
- `GET /v1/workers`: list recent workers for the signed-in user/org
- `POST /v1/workers`
  - Cloud launches return `202` quickly with worker `status=provisioning` and continue provisioning asynchronously.
  - Returns `402 payment_required` with a Polar checkout URL when the paywall is enabled and entitlement is missing.
  - Existing Polar customers are matched by `external_customer_id` first, then by email to preserve access for pre-existing paid users.
- `GET /v1/workers/:id`: includes latest instance metadata when available
- `POST /v1/workers/:id/tokens`
- `DELETE /v1/workers/:id`: deletes worker records and attempts to tear down the backing cloud runtime when destination is `cloud`
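As a client-side sketch of the launch flow against these endpoints: the base URL, bearer-token auth header, and JSON body shape below are assumptions for illustration, not a documented contract.

```shell
# Illustrative helpers for the endpoints above.
# DEN_URL and TOKEN are placeholders; the request body shape is an assumption.
: "${DEN_URL:=http://localhost:3000}"

worker_url() {
  # Builds the GET /v1/workers/:id URL for a given worker id.
  echo "${DEN_URL}/v1/workers/$1"
}

launch_worker() {
  # POST /v1/workers; cloud launches answer 202 with status=provisioning,
  # so the caller should poll worker_url until the worker reports ready.
  curl -fsS -X POST "${DEN_URL}/v1/workers" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"destination":"cloud"}'
}
```

Because provisioning is asynchronous, treat the `202` response as an acknowledgment and poll the worker resource rather than expecting a ready worker in the launch response.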
CI deployment (dev == prod)
The workflow .github/workflows/deploy-den.yml updates Render env vars and deploys the service on every push to dev when this service changes.
Required GitHub Actions secrets:
- `RENDER_API_KEY`
- `RENDER_DEN_CONTROL_PLANE_SERVICE_ID`
- `RENDER_OWNER_ID`
- `DEN_DATABASE_URL`
- `DEN_BETTER_AUTH_SECRET`
Optional GitHub Actions secrets (enable GitHub/Google social sign-in):
- `DEN_GITHUB_CLIENT_ID`
- `DEN_GITHUB_CLIENT_SECRET`
- `DEN_GOOGLE_CLIENT_ID`
- `DEN_GOOGLE_CLIENT_SECRET`
Optional GitHub Actions variables:
- `DEN_RENDER_WORKER_PLAN` (defaults to `standard`)
- `DEN_RENDER_WORKER_OPENWORK_VERSION`: pins the `openwork-orchestrator` npm version installed in workers; the worker build bundles the matching `opencode` release asset into the Render image
- `DEN_CORS_ORIGINS` (defaults to `https://app.openwork.software,https://api.openwork.software,<render-service-url>`)
- `DEN_BETTER_AUTH_TRUSTED_ORIGINS` (defaults to `DEN_CORS_ORIGINS`)
- `DEN_RENDER_WORKER_PUBLIC_DOMAIN_SUFFIX` (defaults to `openwork.studio`)
- `DEN_RENDER_CUSTOM_DOMAIN_READY_TIMEOUT_MS` (defaults to `240000`)
- `DEN_BETTER_AUTH_URL` (defaults to `https://app.openwork.software`)
- `DEN_VERCEL_API_BASE` (defaults to `https://api.vercel.com`)
- `DEN_VERCEL_TEAM_ID` (optional)
- `DEN_VERCEL_TEAM_SLUG` (optional, defaults to `prologe`)
- `DEN_VERCEL_DNS_DOMAIN` (defaults to `openwork.studio`)
- `DEN_POLAR_FEATURE_GATE_ENABLED` (`true`/`false`, defaults to `false`)
- `DEN_POLAR_API_BASE` (defaults to `https://api.polar.sh`)
- `DEN_POLAR_SUCCESS_URL` (defaults to `https://app.openwork.software`)
- `DEN_POLAR_RETURN_URL` (defaults to `DEN_POLAR_SUCCESS_URL`)
Required additional secret when using vanity worker domains:
VERCEL_TOKEN