# Deploy the API
One Dockerfile, one migrate-then-boot sequence, and a handful of env vars. Any long-lived-container host will do.
`apps/api` is a long-running Node process. It needs a host that can run a container, hold a WebSocket open, and — if you're using graphile-worker — keep a worker loop alive. Fly, Railway, Render, Kamal, Cloud Run, or your own VM all fit. Serverless targets (Lambda, Vercel Functions) don't, because of the worker + WS.
## The Dockerfile

`apps/api/Dockerfile` is a multi-stage build. The stages worth knowing by name:
| Stage | Purpose |
|---|---|
| `pruner` | `turbo prune @orbit/api --docker` — narrows the build context to just what the API needs. |
| `deps` | `npm ci` with every dependency, including devDeps. |
| `builder` | `prisma generate` + `tsc` — emits the compiled API. |
| `prod-deps` | `npm ci --omit=dev` for the final runtime image. |
| `migrate` | One-shot entrypoint: `prisma migrate deploy`. |
| `runtime` | Default target. `node --import tsx src/index.ts` under `tini`. |
Build with the repo root as context — not `apps/api` — because `turbo prune` needs to see the workspace topology:
```bash
# from the repo root
docker build -f apps/api/Dockerfile -t orbit-api .
docker build -f apps/api/Dockerfile --target migrate -t orbit-api-migrate .
```

## Migrate, then boot
Orbit separates migrations from the API process on purpose: running them during the API's startup creates a race when N instances boot at once, and ties "deploy failed" to "migration still running." Every deploy target gets the same two-step:
1. Run the `migrate` image (or the equivalent pre-deploy command) to completion.
2. Start the `runtime` image once migrations have succeeded.
**docker-compose.yml**
```yaml
services:
  api-migrate:
    build:
      dockerfile: apps/api/Dockerfile
      target: migrate
    restart: "no"

  api:
    build:
      dockerfile: apps/api/Dockerfile
    depends_on:
      api-migrate:
        condition: service_completed_successfully
    healthcheck:
      test: ["CMD", "node", "-e", "fetch('http://127.0.0.1:4002/health').then(r=>process.exit(r.ok?0:1))"]
```

## Platform recipes
### Railway

`apps/api/railway.toml` ships with the repo. Point a Railway service at that file and you get the right build command, pre-deploy migrate, and start command for free:
**apps/api/railway.toml (Prisma track)**
```toml
[build]
watchPatterns = [
  "package.json", "package-lock.json", "turbo.json",
  "apps/api/**", "packages/shared/**",
]
buildCommand = "npm ci && npx turbo run prisma:generate --filter=@orbit/api && npx turbo run build --filter=@orbit/api"

[deploy]
preDeployCommand = "npm exec --workspace=@orbit/api prisma migrate deploy"
startCommand = "npm run start --workspace=@orbit/api"
```

The watch patterns are tight — the service only rebuilds when the API or `@orbit/shared` changes. Add `packages/ui` to the list if your API imports from it; by default it doesn't.
### Fly.io

Fly doesn't ship a config file with the repo — write one at `fly.toml`:
**fly.toml (Prisma track)**
```toml
# fly.toml (run `fly launch --no-deploy` first to scaffold, then edit)
app = "orbit-api"
primary_region = "ord"

[build]
dockerfile = "apps/api/Dockerfile"

[deploy]
release_command = "/app/node_modules/.bin/prisma migrate deploy"

[http_service]
internal_port = 4002
force_https = true
auto_stop_machines = false # keep alive; WS + worker
auto_start_machines = true
min_machines_running = 1

[[http_service.checks]]
path = "/health"
interval = "15s"
grace_period = "30s"
```

`auto_stop_machines = false` is load-bearing: graphile-worker and the WebSocket server both need the process to stay up. Letting Fly scale to zero kills background jobs.
### Render

- Create a "Web Service" pointing at `apps/api/Dockerfile`, with the repo root as build context.
- Pre-deploy command: `npm exec --workspace=@orbit/api prisma migrate deploy`.
- Health check path: `/health`.
- Disable auto-sleep (requires a paid plan) — same rationale as Fly.
## Environment variables

The API's `.env.example` is the full list. For a smoke-test prod deploy, you need at minimum:
- `DATABASE_URL` — prod Postgres.
- `BETTER_AUTH_SECRET` — long random string.
- `API_ORIGIN`, `WEB_ORIGIN`, `WWW_ORIGIN` — the actual public URLs.
- `RESEND_API_KEY` + `RESEND_FROM` — or your email provider's equivalent.
Everything else gates a feature: billing, OAuth, uploads, jobs. If an env var is missing, the corresponding feature degrades to a no-op rather than throwing on boot — so a minimal deploy is possible, and adding a feature later is a secret update away.
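That degrade-to-no-op pattern can be sketched like this. The `Mailer` shape and `makeMailer` name are illustrative, not Orbit's actual code; only the idea (missing secret means disabled feature, not a boot crash) comes from the text above.

```typescript
// Hypothetical sketch: a feature factory that returns a no-op
// implementation when its gating secret is absent.
type Mailer = { send: (to: string, subject: string) => Promise<void> };

function makeMailer(apiKey: string | undefined): Mailer {
  if (!apiKey) {
    // RESEND_API_KEY unset: email silently becomes a no-op.
    return { send: async () => {} };
  }
  return {
    send: async (_to, _subject) => {
      // with the key present, this would call the real provider
    },
  };
}

// Boot never throws; the feature is live only if the secret is set.
const mailer = makeMailer(process.env.RESEND_API_KEY);
```

Adding email later is then just setting the secret and redeploying; no code change.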
## WebSockets, sticky sessions, and scaling
The realtime hub is in-process. Two API instances do not share state: a broadcast on instance A never reaches a socket on instance B. Until the hub gets a Redis/NATS-backed implementation, run a single instance, or sticky-route WebSocket connections per workspace.
Sticky routing per workspace is the pragmatic path — every workspace is a natural shard. Hash `workspaceSlug` → backend at your load balancer (Fly's `fly-replay`, Cloudflare's custom-hash LB, or a Hono middleware that re-emits the request on the correct node).
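A minimal version of that hash, assuming a static list of backends (the real mapping would live in your load balancer or middleware, and `backendFor` is an invented name):

```typescript
// Sketch: deterministically map a workspaceSlug to one backend so every
// WebSocket connection for a workspace lands on the same API instance.
import { createHash } from "node:crypto";

function backendFor(workspaceSlug: string, backends: string[]): string {
  // sha256 keeps the mapping stable across processes and deploys,
  // as long as the backend list keeps the same size and order.
  const digest = createHash("sha256").update(workspaceSlug).digest();
  return backends[digest.readUInt32BE(0) % backends.length];
}
```

The usual caveat applies: resizing `backends` remaps most workspaces at once. If you expect to scale the instance count often, reach for a consistent-hashing scheme (e.g. rendezvous hashing) instead of a plain modulo.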
## Health & observability
- `GET /health` — liveness. Returns `200 { ok: true }` once the container is up. Used by the docker-compose healthcheck and every deploy platform's probe.
- Structured logs via `evlog` — one JSON-lines event per request, with `user`, `workspace`, and `route` attached. Ship to Datadog, Axiom, Logtail, wherever.
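The `/health` contract is small enough to sketch with Node's stdlib. This is not the API's actual handler (that lives in `apps/api`), just the shape the probes above expect:

```typescript
// Minimal liveness endpoint: GET /health returns 200 {"ok":true}
// as soon as the process can serve requests; everything else is 404.
import { createServer, type Server } from "node:http";

function createHealthServer(): Server {
  return createServer((req, res) => {
    if (req.method === "GET" && req.url === "/health") {
      res.writeHead(200, { "content-type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```

Keep the handler dependency-free: a liveness check that touches the database or a queue will flap whenever a downstream dependency does.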