
Linux VPS ephemeral preview environments with Docker Compose: per-PR stacks on one server

Build Linux VPS ephemeral preview environments per PR using Docker Compose, Traefik routing, and safe cleanup in 2026.

By Anurag Singh
Updated on Apr 14, 2026
Category: Blog

Preview environments remove the guesswork. Every pull request gets a real URL, its own database, and a teardown that leaves no residue. You can build that on Kubernetes, but you don’t need a cluster to get the workflow. In 2026, one well-sized VPS can run many Linux VPS ephemeral preview environments with Docker Compose, a reverse proxy, and a bit of automation.

This guide shows one practical setup: a shared Traefik edge proxy, per-PR Compose stacks, predictable naming, and cleanup scripts that stay safely away from production. The example is an API + worker + Postgres, but the same pattern fits most web apps.

Scenario and architecture (what you’re building)

You’ll keep one “edge” stack running all the time (Traefik + ACME certificates). Each pull request spins up its own Compose project containing:

  • An api container (example: port 3000 internally)
  • A worker container (no inbound ports)
  • A dedicated postgres container with its own volume
  • A unique hostname like pr-184.preview.example.com

Traefik routes by hostname to the right preview stack using Docker labels. Nothing binds to host ports except Traefik.

Prerequisites

  • A VPS running Ubuntu 24.04 LTS or Debian 12 (examples use Ubuntu 24.04). 2 vCPU / 4 GB RAM is a sensible starting point.
  • Root or sudo access; SSH keys set up.
  • A domain you control, with a wildcard DNS record for preview subdomains.
  • Docker Engine and the Compose plugin installed.
  • A CI system that can SSH into your VPS (GitHub Actions, GitLab CI, Drone, etc.).

If you haven’t hardened SSH yet, read SSH bastion host setup with ProxyJump, MFA, and audit logs before you let CI touch the server.

Step 1: Provision a VPS and create a deploy user

Preview stacks are easiest to run when CPU and disk performance stay consistent. A HostMyCode VPS works well here because you control the OS, networking, and Docker versions without running a whole cluster.

  1. Create a deploy user with limited privileges:

    sudo adduser deploy
    sudo usermod -aG sudo deploy
  2. Allow Docker without sudo (optional, but convenient for CI):

    sudo usermod -aG docker deploy
  3. Log out/in so group membership applies, then verify:

    id deploy
    # expected: groups include docker and sudo

Step 2: Install Docker Engine + Compose plugin (pinned versions)

Use Docker’s official repo so you can stay on a known-good release. The commands below install Docker Engine 27.x (current stable line in 2026) plus the Compose v2 plugin.

  1. sudo apt-get update
    sudo apt-get install -y ca-certificates curl gnupg
  2. sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
  3. echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  4. Verify versions and that Docker is running:

    docker --version
    docker compose version
    sudo systemctl status docker --no-pager

    Expected output includes Docker 27.x and Compose v2.x, and active (running).
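
Because the goal is a known-good release, it's worth holding the packages so routine apt upgrades don't silently move you off the versions you tested; unhold them when you upgrade deliberately:

sudo apt-mark hold docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# later, to upgrade on purpose:
# sudo apt-mark unhold docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin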

Step 3: Create DNS records for preview routing

At your DNS provider create:

  • A record: preview.example.com → your VPS IP
  • CNAME wildcard: *.preview.example.com → preview.example.com

That’s what allows hostnames like pr-184.preview.example.com without adding a DNS record per PR.
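
To check the wildcard is live before touching Traefik, resolve any arbitrary label with dig (from dnsutils); pr-test is just an example:

dig +short pr-test.preview.example.com
# expected: preview.example.com. followed by your VPS IP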

If you need a quick DNS primer, HostMyCode’s domains panel is usually the simplest place to manage these records when your domain lives with the same provider.

Step 4: Deploy the shared Traefik edge stack (TLS + routing)

Traefik runs as a permanent stack on ports 80/443, issues Let’s Encrypt certificates, and forwards traffic to the matching preview containers.

  1. Create directories:

    sudo mkdir -p /opt/edge-traefik/{data,logs}
    sudo touch /opt/edge-traefik/data/acme.json
    sudo chmod 600 /opt/edge-traefik/data/acme.json
  2. Create a dedicated Docker network shared by Traefik and all preview stacks:

    docker network create traefik-public

    If the network already exists, Docker prints an error and exits non-zero; that’s harmless when running by hand, since the existing network is simply reused.

  3. Create /opt/edge-traefik/docker-compose.yml:

    services:
      traefik:
        image: traefik:v3.2
        command:
          - "--api.dashboard=false"
          - "--log.level=INFO"
          - "--accesslog=true"
          - "--accesslog.filepath=/logs/access.log"
          - "--providers.docker=true"
          - "--providers.docker.exposedbydefault=false"
          - "--entrypoints.web.address=:80"
          - "--entrypoints.websecure.address=:443"
          - "--certificatesresolvers.le.acme.email=ops@example.com"
          - "--certificatesresolvers.le.acme.storage=/data/acme.json"
          - "--certificatesresolvers.le.acme.httpchallenge.entrypoint=web"
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock:ro"
          - "./data:/data"
          - "./logs:/logs"
        networks:
          - traefik-public
        restart: unless-stopped
    
    networks:
      traefik-public:
        external: true
  4. Start it:

    cd /opt/edge-traefik
    docker compose up -d
    docker compose ps

Verification: open http://preview.example.com. Expect a 404: no routes are defined yet, so Traefik has nothing to match. What matters is that Traefik is running and ports 80/443 are reachable.
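
You can confirm the same thing from your workstation; the 404 coming back is proof that Traefik is answering on port 80:

curl -I http://preview.example.com
# expected: HTTP/1.1 404 Not Found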

Step 5: Create the preview app Compose template (per PR)

Each PR gets its own directory and its own Compose project name. That project name is the isolation boundary: Docker uses it to namespace containers, networks, and volumes.
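
For example, with Compose v2 defaults, a project named pr-184 built from the template below produces names like these, which is worth knowing before you write cleanup tooling:

# docker compose -p pr-184 up -d yields:
#   containers: pr-184-api-1, pr-184-worker-1, pr-184-postgres-1
#   network:    pr-184_preview-internal
#   volume:     pr-184_pgdata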

  1. Create a template folder:

    sudo mkdir -p /opt/preview-template
  2. Create /opt/preview-template/docker-compose.yml (replace the image names with your own):

    services:
      api:
        image: ghcr.io/acme/payments-api:${APP_TAG}
        environment:
          - NODE_ENV=production
          - DATABASE_URL=postgresql://preview:${DB_PASSWORD}@postgres:5432/previewdb
          - PREVIEW_ID=${PREVIEW_ID}
        depends_on:
          - postgres
        networks:
          - traefik-public
          - preview-internal
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.${ROUTER_NAME}.rule=Host(`${PREVIEW_HOST}`)"
          - "traefik.http.routers.${ROUTER_NAME}.entrypoints=websecure"
          - "traefik.http.routers.${ROUTER_NAME}.tls=true"
          - "traefik.http.routers.${ROUTER_NAME}.tls.certresolver=le"
          - "traefik.http.services.${SERVICE_NAME}.loadbalancer.server.port=3000"
    
      worker:
        image: ghcr.io/acme/payments-worker:${APP_TAG}
        environment:
          - NODE_ENV=production
          - DATABASE_URL=postgresql://preview:${DB_PASSWORD}@postgres:5432/previewdb
          - PREVIEW_ID=${PREVIEW_ID}
        depends_on:
          - postgres
        networks:
          - preview-internal
    
      postgres:
        image: postgres:17
        environment:
          - POSTGRES_DB=previewdb
          - POSTGRES_USER=preview
          - POSTGRES_PASSWORD=${DB_PASSWORD}
        volumes:
          - pgdata:/var/lib/postgresql/data
        networks:
          - preview-internal
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U preview -d previewdb"]
          interval: 5s
          timeout: 3s
          retries: 20
    
    volumes:
      pgdata:
    
    networks:
      traefik-public:
        external: true
      preview-internal:
        driver: bridge

Notes on the structure:

  • Only api joins traefik-public. Postgres stays private.
  • No host ports are published for the preview stack, which avoids collisions.
  • Traefik labels are parameterized so each PR gets its own router/service names.
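
Before scripting the deployment, you can sanity-check the template’s variable interpolation by rendering it with placeholder values (everything below is a dummy):

cd /opt/preview-template
PREVIEW_ID=pr-0 PREVIEW_HOST=pr-0.preview.example.com APP_TAG=test \
  ROUTER_NAME=pr-0-router SERVICE_NAME=pr-0-svc DB_PASSWORD=placeholder \
  docker compose config
# expected: fully rendered YAML with no "variable is not set" warnings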

If you’re still managing secrets with plaintext env files, fix that before you scale this up. See Linux VPS secrets management with sops + age for a practical approach.

Step 6: Write the deploy script (create, update, verify)

This script deploys one preview for a PR number and an image tag. It uses predictable paths and names so you can clean up safely later.

  1. Create /usr/local/bin/deploy-preview:

    sudo tee /usr/local/bin/deploy-preview > /dev/null <<'EOF'
    #!/usr/bin/env bash
    set -euo pipefail
    
    if [[ $# -ne 2 ]]; then
      echo "Usage: deploy-preview <pr_number> <app_tag>" 1>&2
      exit 2
    fi
    
    PR="$1"
    APP_TAG="$2"
    
    BASE_DIR="/opt/previews"
    TEMPLATE_DIR="/opt/preview-template"
    PREVIEW_ID="pr-${PR}"
    STACK_DIR="${BASE_DIR}/${PREVIEW_ID}"
    
    PREVIEW_HOST="${PREVIEW_ID}.preview.example.com"
    ROUTER_NAME="${PREVIEW_ID}-router"
    SERVICE_NAME="${PREVIEW_ID}-svc"
    
    # Create a stable, per-preview password (good enough for ephemeral DBs).
    # You can replace this with sops/age or a CI-provided secret.
    DB_PASSWORD_FILE="${STACK_DIR}/.db_password"
    
    mkdir -p "${STACK_DIR}"
    
    if [[ ! -f "${DB_PASSWORD_FILE}" ]]; then
      umask 077
      tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 > "${DB_PASSWORD_FILE}"
    fi
    DB_PASSWORD="$(cat "${DB_PASSWORD_FILE}")"
    
    # Sync template into the stack directory. Exclude the per-preview files so
    # --delete doesn't wipe them; otherwise each redeploy would regenerate the
    # DB password while the existing Postgres volume kept the old one.
    rsync -a --delete --exclude='.env' --exclude='.db_password' "${TEMPLATE_DIR}/" "${STACK_DIR}/"
    
    cat > "${STACK_DIR}/.env" <<ENV
    APP_TAG=${APP_TAG}
    PREVIEW_ID=${PREVIEW_ID}
    PREVIEW_HOST=${PREVIEW_HOST}
    ROUTER_NAME=${ROUTER_NAME}
    SERVICE_NAME=${SERVICE_NAME}
    DB_PASSWORD=${DB_PASSWORD}
    ENV
    
    cd "${STACK_DIR}"
    
    # Use project name to isolate containers/volumes.
    PROJECT="${PREVIEW_ID}"
    
    docker compose -p "${PROJECT}" pull
    docker compose -p "${PROJECT}" up -d
    
    echo "Deployed ${PREVIEW_ID} => https://${PREVIEW_HOST}"
    
    echo "Waiting for Postgres health..."
    for i in $(seq 1 60); do
      status="$(docker inspect --format='{{json .State.Health.Status}}' "${PROJECT}-postgres-1" 2>/dev/null || true)"
      if [[ "${status}" == '"healthy"' ]]; then
        echo "Postgres is healthy."
        break
      fi
      sleep 2
      if [[ $i -eq 60 ]]; then
        echo "Postgres did not become healthy. Check logs:" 1>&2
        docker compose -p "${PROJECT}" logs --no-color postgres 1>&2
        exit 1
      fi
    done
    
    # Lightweight HTTP verification (expects /healthz to return 200).
    # If your app differs, adjust the path.
    echo "Verifying HTTPS route..."
    code="$(curl -sk -o /dev/null -w '%{http_code}' "https://${PREVIEW_HOST}/healthz" || true)"
    if [[ "${code}" != "200" ]]; then
      echo "Health check failed with HTTP ${code}. Recent API logs:" 1>&2
      docker compose -p "${PROJECT}" logs --no-color --tail=80 api 1>&2
      exit 1
    fi
    
    echo "OK: ${PREVIEW_HOST} is serving traffic."
    EOF
    
    sudo chmod +x /usr/local/bin/deploy-preview
  2. Create the previews base directory:

    sudo mkdir -p /opt/previews
    sudo chown -R deploy:deploy /opt/previews /opt/preview-template

Verification (manual run):

sudo -iu deploy deploy-preview 184 sha-7c91b2a
# expected: "Deployed pr-184 => https://pr-184.preview.example.com" then "OK"

Step 7: Lock down inbound network access (only 80/443/SSH)

Preview stacks pile up fast; stray open ports are an easy mistake. If you already run nftables, stick with it. If not, UFW is fine on a single VPS.

Example with UFW:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
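
One caveat: Docker publishes container ports through its own iptables chain, bypassing UFW, so the UFW status alone proves little. In this setup only Traefik publishes ports (80/443, which you allow anyway), but verify what actually listens:

sudo ss -tlnp
# expected: sshd on 22 and docker-proxy (Traefik) on 80/443, nothing else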

If you want audit-friendly firewall rules and rate limits, see VPS firewall logging with nftables.

Step 8: Add a cleanup script (safe teardown + optional data wipe)

Cleanup needs real guardrails. The simplest rule is also the safest: only delete stacks under /opt/previews/pr-*, and only bring down Compose projects with that exact pr-### name.

  1. Create /usr/local/bin/destroy-preview:

    sudo tee /usr/local/bin/destroy-preview > /dev/null <<'EOF'
    #!/usr/bin/env bash
    set -euo pipefail
    
    if [[ $# -lt 1 || $# -gt 2 ]]; then
      echo "Usage: destroy-preview <pr_number> [--wipe-volumes]" 1>&2
      exit 2
    fi
    
    PR="$1"
    WIPE="${2:-}"
    
    PREVIEW_ID="pr-${PR}"
    BASE_DIR="/opt/previews"
    STACK_DIR="${BASE_DIR}/${PREVIEW_ID}"
    PROJECT="${PREVIEW_ID}"
    
    if [[ "${PREVIEW_ID}" != pr-* ]]; then
      echo "Refusing: preview id must start with pr-" 1>&2
      exit 3
    fi
    
    if [[ ! -d "${STACK_DIR}" ]]; then
      echo "Nothing to do: ${STACK_DIR} not found"
      exit 0
    fi
    
    cd "${STACK_DIR}"
    
    if [[ "${WIPE}" == "--wipe-volumes" ]]; then
      docker compose -p "${PROJECT}" down -v --remove-orphans
    else
      docker compose -p "${PROJECT}" down --remove-orphans
    fi
    
    # Remove files last, after successful 'down'.
    rm -rf "${STACK_DIR}"
    
    echo "Destroyed ${PREVIEW_ID} (wipe volumes: ${WIPE:-no})"
    EOF
    
    sudo chmod +x /usr/local/bin/destroy-preview

Manual verification:

sudo -iu deploy destroy-preview 184 --wipe-volumes
# expected: "Destroyed pr-184"

Step 9: Wire it into CI (GitHub Actions example)

The CI job builds and pushes images, then deploys over SSH. On the server side, you only need SSH access for the deploy user.

Create a deploy key scoped to deploy (use a dedicated key pair for CI). Store it as VPS_SSH_KEY, and store the host/IP as VPS_HOST.
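
A minimal key setup might look like this (the file name and key comment are arbitrary):

# On your workstation: generate a dedicated CI key pair
ssh-keygen -t ed25519 -N "" -C "ci-preview" -f ci_preview_key

# Append ci_preview_key.pub to ~deploy/.ssh/authorized_keys on the VPS,
# then store the private key file's contents as the VPS_SSH_KEY secret.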

Example workflow snippet (.github/workflows/preview.yml):

name: Preview
on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

jobs:
  deploy:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build + push images here (omitted): tag should be unique per commit

      - name: Deploy preview
        env:
          VPS_HOST: ${{ secrets.VPS_HOST }}
          VPS_SSH_KEY: ${{ secrets.VPS_SSH_KEY }}
        run: |
          PR=${{ github.event.number }}
          TAG=sha-${{ github.sha }}
          mkdir -p ~/.ssh
          echo "$VPS_SSH_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh -o StrictHostKeyChecking=accept-new deploy@${VPS_HOST} "deploy-preview ${PR} ${TAG}"

  cleanup:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Destroy preview
        env:
          VPS_HOST: ${{ secrets.VPS_HOST }}
          VPS_SSH_KEY: ${{ secrets.VPS_SSH_KEY }}
        run: |
          PR=${{ github.event.number }}
          mkdir -p ~/.ssh
          echo "$VPS_SSH_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh -o StrictHostKeyChecking=accept-new deploy@${VPS_HOST} "destroy-preview ${PR} --wipe-volumes"

Expected behavior:

  • Opening/updating a PR deploys or updates the same preview stack.
  • Closing the PR destroys the stack and deletes its Postgres volume.

Step 10: Operational checks (resource caps, logs, and self-healing)

Preview stacks usually fail for unglamorous reasons: the disk fills with images, memory spikes under test loads, or a migration crash-loops.

  • Cap resources per container so one noisy PR can’t starve the whole box:

    # Example additions under a service in docker-compose.yml
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 768M

    Note: Compose resource limits are enforced when using the Docker Compose v2 plugin; behavior differs across runtimes. Test on your server (a verification one-liner follows this list).

  • Watch disk usage. Image layers add up quickly:

    docker system df
    docker image prune -af --filter "until=168h"

    If disk pressure has burned you before, pair this with VPS disk space troubleshooting.

  • Restart what should restart. Traefik already uses unless-stopped; your app containers should as well:

    restart: unless-stopped

    If you prefer system-level supervision, systemd watchdog on a VPS shows a clean pattern.
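
To confirm the memory cap from the first bullet actually applied, inspect the running container (pr-184-api-1 is the example name from earlier):

docker inspect --format '{{.HostConfig.Memory}}' pr-184-api-1
# expected: 805306368 (768M in bytes); 0 means no limit is set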

Common pitfalls (and how to avoid them)

  • Wildcard DNS missing or proxied incorrectly. If your DNS provider has a “proxy” mode, disable it for the wildcard record until ACME succeeds. Symptom: TLS never issues, and Traefik logs show challenge failures.

  • Two stacks share the same router/service name. Traefik labels must be unique per preview. That’s why the script sets ROUTER_NAME and SERVICE_NAME using the PR number.

  • Publishing host ports in the preview stack. If you add ports: to api, PRs will collide. Let Traefik handle routing instead.

  • Database data persists across code updates unexpectedly. That’s normal: the Postgres volume stays for the life of the PR. If you want a fresh DB on each deploy, add a reset step in CI, or run destroy-preview --wipe-volumes and redeploy.

  • CI SSH key has too much power. Don’t reuse a personal key. Use a dedicated key restricted to the deploy user, and consider limiting allowed commands in ~deploy/.ssh/authorized_keys once the workflow settles.
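
For that last point, a forced command in authorized_keys is the standard pattern. Here’s a minimal sketch; ci-dispatch is a hypothetical wrapper name:

# ~deploy/.ssh/authorized_keys (one line):
command="/usr/local/bin/ci-dispatch",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... ci-preview

# /usr/local/bin/ci-dispatch:
#!/usr/bin/env bash
set -euo pipefail
case "${SSH_ORIGINAL_COMMAND:-}" in
  deploy-preview\ *|destroy-preview\ *)
    # Intentional word splitting: the command becomes argv for exec.
    exec ${SSH_ORIGINAL_COMMAND} ;;
  *)
    echo "command not allowed" 1>&2; exit 1 ;;
esac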

Rollback plan (if a deployment breaks)

Rollback is straightforward: redeploy the same PR using an older image tag.

  1. Find a previous tag (for example, the last successful commit SHA tag).

  2. Redeploy with the old tag:

    sudo -iu deploy deploy-preview 184 sha-1a2b3c4
  3. If the schema changed and you need a clean slate, wipe the volume and redeploy:

    sudo -iu deploy destroy-preview 184 --wipe-volumes
    sudo -iu deploy deploy-preview 184 sha-1a2b3c4

Next steps (make it production-grade for a team)

  • Switch DB_PASSWORD to real secrets management. The per-PR password file keeps the tutorial simple. For a team, store encrypted secrets and decrypt during deployment.

  • Add automatic TTL cleanup. A nightly job that removes previews older than 14 days keeps disk usage predictable.

  • Centralize logs and traces. If you’re debugging flaky PRs, ship logs with a collector instead of SSHing into the server. The pattern in VPS monitoring with OpenTelemetry Collector works well for preview fleets.

  • Separate “edge” from “previews” when you grow. If previews get noisy, put Traefik and CI runners on one VPS and the previews on another.

Summary

You don’t need a full orchestration platform to get reliable preview URLs. One VPS can host many isolated stacks if you centralize routing, keep naming deterministic, and make teardown scripts conservative. The core pieces are a shared reverse-proxy network, per-PR Compose project names, basic health verification, and cleanup that never wanders outside /opt/previews.

If you expect lots of concurrent pull requests and want consistent performance for your Linux VPS ephemeral preview environments, start with a managed VPS hosting plan so patching, baseline hardening, and routine ops don’t steal time from releases. You still keep full control over Docker and your deployment scripts.

If you’re building preview stacks for a team, choose a VPS that stays stable under load and still gives you full Linux control. A HostMyCode VPS is a straightforward place to run Traefik + per-PR Docker Compose deployments, and managed VPS hosting is a good fit if you don’t want to babysit updates and baseline security.

FAQ

How many preview environments can one VPS handle?

It depends on your app and database load, but a 2 vCPU / 4 GB VPS commonly handles 5–15 light PR stacks if you cap memory per service and prune images weekly. Measure, then size up.

Do I need a wildcard TLS certificate?

No. Traefik can issue individual certificates per hostname using the HTTP-01 challenge. Wildcard certs are optional and usually require DNS-01 automation.
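
If you do want a wildcard certificate, switch the resolver from the HTTP challenge to a DNS challenge. The provider value depends on where your DNS lives (cloudflare is only an example), and it needs credentials in the environment, e.g. CF_DNS_API_TOKEN for Cloudflare:

# In the Traefik command: list, replacing the httpchallenge line
- "--certificatesresolvers.le.acme.dnschallenge=true"
- "--certificatesresolvers.le.acme.dnschallenge.provider=cloudflare"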

How do I prevent previews from calling production APIs?

Set explicit preview-only environment variables (API base URLs, keys) and enforce allowlists in your upstream services. Treat preview as untrusted until you’ve locked it down.

Can I run this without exposing SSH to the internet?

Yes. Put the VPS behind a private admin network and connect CI via a VPN or a bastion host. Tailscale is a common choice; see Tailscale VPS VPN setup.

What’s the safest way to clean up old previews automatically?

Store a deployment timestamp (for example in /opt/previews/pr-123/.created_at) and have a nightly script delete only directories matching pr-* older than your TTL, calling destroy-preview instead of running docker commands directly.
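
A sketch of that nightly sweep, assuming deploy-preview is extended to touch the .created_at marker on first deploy (the script keys off the marker file’s mtime):

#!/usr/bin/env bash
# prune-previews: destroy previews whose marker is older than TTL_DAYS.
set -euo pipefail
TTL_DAYS=14
now="$(date +%s)"
for marker in /opt/previews/pr-*/.created_at; do
  [[ -f "${marker}" ]] || continue   # glob matched nothing
  age_days=$(( (now - $(stat -c %Y "${marker}")) / 86400 ))
  if (( age_days >= TTL_DAYS )); then
    pr_dir="$(basename "$(dirname "${marker}")")"   # e.g. pr-184
    destroy-preview "${pr_dir#pr-}" --wipe-volumes
  fi
done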