
Linux VPS blue-green deployment with systemd + Nginx: zero-downtime releases without containers (2026)

Linux VPS blue-green deployment using systemd and Nginx for zero-downtime releases, health checks, rollback, and verification.

By Anurag Singh
Updated on Apr 14, 2026
Category: Blog

A smooth deploy on a single server tends to fall apart at the worst time: the port is already taken, a migration locks tables, or a restart cuts off live requests. A Linux VPS blue-green deployment sidesteps the “replace in place” problem. You keep a known-good version running, start the next version on a different port, verify it, then switch traffic over.

This walkthrough uses a simple, container-free setup: systemd template services plus an Nginx include file that selects the active upstream. The example is a small SaaS API (FastAPI behind Gunicorn/Uvicorn workers), but the same pattern works for any HTTP service that binds to a port.

What you’ll build (and why this pattern works)

You’ll run two independent app instances on the same VPS:

  • Blue on 127.0.0.1:9011
  • Green on 127.0.0.1:9012

Nginx listens on 443 and proxies to the “active” color using one small include file. Your deploy flow becomes predictable: start the inactive color → check health → run a quick smoke test → flip traffic → keep the old one around briefly for rollback.

If you already run a reverse proxy, it’s worth lining up your baseline first. See Nginx reverse proxy on a VPS (2026), then come back here for the blue/green switch mechanics.

Prerequisites

  • A VPS running Ubuntu 24.04 LTS or Debian 12 (commands below assume Ubuntu 24.04).
  • Root or sudo access.
  • A DNS name pointed at your server (example: api.saltbox.example).
  • Nginx installed (or you’ll install it below).
  • A Python web app that can expose /healthz returning HTTP 200.

Hosting note: this fits best when you want single-node production deploys with tight control. If you’d rather offload OS patching and routine safety checks, use managed VPS hosting. If you prefer to manage everything yourself, a HostMyCode VPS is a straightforward fit.

Step 1 — Create the app user, directories, and a minimal example service

Create a dedicated user and a predictable directory layout. It keeps permissions boring and makes rollbacks less stressful.

sudo useradd --system --create-home --home-dir /srv/saltbox --shell /usr/sbin/nologin saltbox
sudo mkdir -p /srv/saltbox/releases /srv/saltbox/shared
sudo chown -R saltbox:saltbox /srv/saltbox

In a real deployment, each release directory would be a checked-out tag/commit. For something you can run immediately, here’s a tiny FastAPI app with a health endpoint.

sudo -u saltbox bash -lc 'mkdir -p /srv/saltbox/releases/r2026-04-14/app'
sudo -u saltbox bash -lc 'cat > /srv/saltbox/releases/r2026-04-14/app/main.py <<"PY"
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
def healthz():
    return {"status": "ok"}

@app.get("/v")
def version():
    return {"release": "r2026-04-14"}
PY'

Create a per-release virtualenv and install dependencies. In production you should pin and review versions; here we keep it short and explicit.

sudo -u saltbox bash -lc 'python3 -m venv /srv/saltbox/releases/r2026-04-14/venv'
sudo -u saltbox bash -lc '/srv/saltbox/releases/r2026-04-14/venv/bin/pip install --upgrade pip'
sudo -u saltbox bash -lc '/srv/saltbox/releases/r2026-04-14/venv/bin/pip install fastapi==0.115.12 "uvicorn[standard]==0.35.0" gunicorn==23.0.0'

Point a stable symlink at the current release:

sudo -u saltbox ln -sfn /srv/saltbox/releases/r2026-04-14 /srv/saltbox/current

Step 2 — Install Nginx and (optionally) get TLS

sudo apt-get update
sudo apt-get install -y nginx

If you need HTTPS certificates, use your normal process. On many VPS setups, Certbot remains the quickest path:

sudo apt-get install -y certbot python3-certbot-nginx

Then:

sudo certbot --nginx -d api.saltbox.example

If you’re also tightening your baseline, add sane firewall rules while you’re here. See UFW Firewall Setup for a VPS in 2026 or, if you run nftables, VPS firewall logging with nftables.

Step 3 — Create systemd template units for blue and green

Use a systemd @ template unit so the only difference between blue and green is the instance name after the @.

Create /etc/systemd/system/saltbox-api@.service:

sudo tee /etc/systemd/system/saltbox-api@.service >/dev/null <<'UNIT'
[Unit]
Description=Saltbox API (%i) - blue/green instance
After=network.target

[Service]
Type=simple
User=saltbox
Group=saltbox
WorkingDirectory=/srv/saltbox/current/app

# Map instance name to port using an EnvironmentFile created below
EnvironmentFile=/etc/saltbox/saltbox-api-%i.env

# Keep logs in journald; ship them later if you want
ExecStart=/srv/saltbox/current/venv/bin/gunicorn \
  --workers 2 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 127.0.0.1:${PORT} \
  --access-logfile - \
  --error-logfile - \
  main:app

# Clean shutdowns during switchovers: Gunicorn treats SIGTERM as graceful
# (finish in-flight requests), while SIGQUIT is its quick, immediate shutdown
KillSignal=SIGTERM
TimeoutStopSec=20
Restart=on-failure
RestartSec=2

# Basic hardening (keep it compatible with typical apps)
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/srv/saltbox

[Install]
WantedBy=multi-user.target
UNIT

Now create the env files that define ports for each color:

sudo mkdir -p /etc/saltbox
sudo tee /etc/saltbox/saltbox-api-blue.env >/dev/null <<'EOF'
PORT=9011
EOF
sudo tee /etc/saltbox/saltbox-api-green.env >/dev/null <<'EOF'
PORT=9012
EOF

Reload systemd and start the blue instance:

sudo systemctl daemon-reload
sudo systemctl enable --now saltbox-api@blue

You should see it listening on 127.0.0.1:9011:

sudo systemctl status saltbox-api@blue --no-pager

Quick local verification:

curl -sS http://127.0.0.1:9011/healthz
curl -sS http://127.0.0.1:9011/v

Expected output:

{"status":"ok"}
{"release":"r2026-04-14"}
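Gunicorn usually binds within a second or two, but if you script these checks, a short wait loop keeps you from racing the first request. A small sketch (wait_healthy is a name chosen here; the port and /healthz path match the setup above, and the 30-second default is an arbitrary choice):

```shell
# wait_healthy PORT [TIMEOUT_SECONDS] — poll /healthz until it answers or we give up
wait_healthy() {
  local port="$1" timeout="${2:-30}"
  local deadline=$(( $(date +%s) + timeout ))
  until curl -fsS "http://127.0.0.1:${port}/healthz" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "gave up waiting on port ${port}" >&2
      return 1
    fi
    sleep 1
  done
  echo "healthy on port ${port}"
}

# Example: wait_healthy 9011 30
```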

If you want stricter self-healing (health checks + watchdog), see Systemd watchdog on a VPS. It pairs well with blue/green, but get the basic rollout working first.

Step 4 — Configure Nginx to proxy to the “active” color

You’ll control the active upstream by editing one include file and reloading Nginx. A reload is graceful: existing connections stay up while workers refresh config.

Create an include directory:

sudo mkdir -p /etc/nginx/saltbox

Create the upstream selector file /etc/nginx/saltbox/api-upstream.conf (start with blue):

sudo tee /etc/nginx/saltbox/api-upstream.conf >/dev/null <<'EOF'
set $saltbox_upstream http://127.0.0.1:9011;
EOF

Create a site config /etc/nginx/sites-available/saltbox-api.conf:

sudo tee /etc/nginx/sites-available/saltbox-api.conf >/dev/null <<'NGINX'
server {
    listen 80;
    server_name api.saltbox.example;

    # If you used Certbot, it may manage redirects itself.
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name api.saltbox.example;

    # If Certbot configured SSL blocks, keep its managed paths.
    # ssl_certificate /etc/letsencrypt/live/api.saltbox.example/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/api.saltbox.example/privkey.pem;

    include /etc/nginx/saltbox/api-upstream.conf;

    # Basic proxy settings for APIs
    location / {
        proxy_pass $saltbox_upstream;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 3s;
        proxy_read_timeout 60s;
    }

    location = /nginx-health {
        access_log off;
        return 200 "ok\n";
    }
}
NGINX

Enable the site and test configuration:

sudo ln -sfn /etc/nginx/sites-available/saltbox-api.conf /etc/nginx/sites-enabled/saltbox-api.conf
sudo nginx -t
sudo systemctl reload nginx

Verify through Nginx (replace the domain if you don’t have DNS yet; from the server itself, curl --resolve api.saltbox.example:443:127.0.0.1 lets you hit the local Nginx without DNS):

curl -sS https://api.saltbox.example/healthz
curl -sS https://api.saltbox.example/v

Step 5 — Deploy a new release to the inactive color

This is where the pattern pays off: you don’t disturb the serving process while you stage the next one.

Create a new release directory with a visible change. We’ll tweak the version response so the switch is obvious.

sudo -u saltbox bash -lc 'mkdir -p /srv/saltbox/releases/r2026-04-14b/app'
sudo -u saltbox bash -lc 'cat > /srv/saltbox/releases/r2026-04-14b/app/main.py <<"PY"
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
def healthz():
    return {"status": "ok"}

@app.get("/v")
def version():
    return {"release": "r2026-04-14b", "color_hint": "next"}
PY'

Create its venv (you can speed this up later with caching or a wheelhouse):

sudo -u saltbox bash -lc 'python3 -m venv /srv/saltbox/releases/r2026-04-14b/venv'
sudo -u saltbox bash -lc '/srv/saltbox/releases/r2026-04-14b/venv/bin/pip install --upgrade pip'
sudo -u saltbox bash -lc '/srv/saltbox/releases/r2026-04-14b/venv/bin/pip install fastapi==0.115.12 "uvicorn[standard]==0.35.0" gunicorn==23.0.0'

Switch /srv/saltbox/current to the new release. This does not change the running blue process; it already has the old code loaded. It only affects whichever instance you start or restart next, which is exactly the point.

sudo -u saltbox ln -sfn /srv/saltbox/releases/r2026-04-14b /srv/saltbox/current

Start the inactive color. If blue is live, deploy to green:

sudo systemctl enable --now saltbox-api@green

Step 6 — Health-check the inactive color and run a smoke test

Start with local checks on the loopback port:

curl -sS http://127.0.0.1:9012/healthz
curl -sS http://127.0.0.1:9012/v

Expected output includes the new release:

{"status":"ok"}
{"release":"r2026-04-14b","color_hint":"next"}
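The curl pair above can be wrapped into a reusable smoke test that fails loudly when the release string doesn’t match what you staged. A sketch (smoke_test is a name chosen here; the JSON shape matches the example app):

```shell
# smoke_test PORT EXPECTED_RELEASE — fail unless /v reports the release we staged
smoke_test() {
  local body
  body="$(curl -fsS "http://127.0.0.1:$1/v")" || { echo "/v unreachable on port $1" >&2; return 1; }
  case "$body" in
    *"\"release\":\"$2\""*) echo "smoke ok: $2" ;;
    *) echo "unexpected /v body: $body" >&2; return 1 ;;
  esac
}

# Example: smoke_test 9012 r2026-04-14b
```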

If your app needs secrets (DB passwords, API tokens), keep them out of unit files. Use an encrypted workflow instead. This guide fits cleanly with the model here: Linux VPS secrets management with sops + age (2026). Decrypt into /etc/saltbox/ during deploy, then restart only the inactive color.

Step 7 — Flip traffic with a single file change (and reload Nginx)

Once green looks good, point Nginx at it and reload. This is deliberately low drama: Nginx validates config before applying it, and reloads don’t drop established connections.

sudo tee /etc/nginx/saltbox/api-upstream.conf >/dev/null <<'EOF'
set $saltbox_upstream http://127.0.0.1:9012;
EOF

sudo nginx -t
sudo systemctl reload nginx

Verify from outside:

curl -sS https://api.saltbox.example/v

Expected output should now show r2026-04-14b.
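For repeat deploys, the flip is worth scripting so you never type the wrong port. This sketch reads the current port out of the include file, swaps it, validates, and reloads (switch_upstream is a name chosen here; the ports and file path match this guide, and it assumes root privileges):

```shell
UPSTREAM_FILE="${UPSTREAM_FILE:-/etc/nginx/saltbox/api-upstream.conf}"

# switch_upstream — flip blue (9011) <-> green (9012) in the Nginx include, then reload
switch_upstream() {
  local cur next
  cur="$(grep -oE '[0-9]+;' "$UPSTREAM_FILE" | tr -d ';')"
  case "$cur" in
    9011) next=9012 ;;
    9012) next=9011 ;;
    *) echo "unexpected upstream port: $cur" >&2; return 1 ;;
  esac
  printf 'set $saltbox_upstream http://127.0.0.1:%s;\n' "$next" > "$UPSTREAM_FILE"
  nginx -t && systemctl reload nginx
  echo "traffic switched: $cur -> $next"
}
```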

Step 8 — Keep blue warm for fast rollback, then retire it cleanly

Don’t kill blue the instant you switch. Leave it up for 10–30 minutes while you watch error rates and latency.

When you’re comfortable, retire the old instance:

sudo systemctl stop saltbox-api@blue
sudo systemctl disable saltbox-api@blue

Old releases can quietly eat disk. Before you start pruning, check what’s actually growing. This guide is a solid checklist: VPS disk space troubleshooting.

Verification checklist (what “done” looks like)

  • systemctl is-active saltbox-api@green returns active.
  • curl -fsS http://127.0.0.1:9012/healthz exits 0.
  • curl -fsS https://api.saltbox.example/healthz exits 0.
  • Nginx config validates: nginx -t shows syntax is ok.
  • Access logs show traffic continuing during the switch (no dip from reload).
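The checklist collapses into one function you can run after every switch. A sketch (verify_deploy is a name chosen here; swap in your own domain):

```shell
# verify_deploy COLOR PORT DOMAIN — run the post-switch checklist, stop at first failure
verify_deploy() {
  local color="$1" port="$2" domain="$3"
  systemctl is-active --quiet "saltbox-api@${color}" \
    || { echo "FAIL: saltbox-api@${color} is not active" >&2; return 1; }
  curl -fsS "http://127.0.0.1:${port}/healthz" >/dev/null \
    || { echo "FAIL: local health check on port ${port}" >&2; return 1; }
  curl -fsS "https://${domain}/healthz" >/dev/null \
    || { echo "FAIL: public health check via ${domain}" >&2; return 1; }
  nginx -t >/dev/null 2>&1 \
    || { echo "FAIL: nginx config does not validate" >&2; return 1; }
  echo "OK: ${color} is serving on port ${port}"
}

# Example: verify_deploy green 9012 api.saltbox.example
```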

Common pitfalls (and how to avoid them)

  • Port collision: if your inactive color fails to start, check ss -ltnp | grep 9012. Something else may already be bound.
  • Symlink confusion: if you update /srv/saltbox/current and restart the wrong color, you can accidentally restart the live instance. Keep a tiny runbook: “start inactive, validate, switch, then stop old.”
  • Nginx caching a stale upstream value: the include + variable approach only updates on reload. If you edit the file but don’t reload, nothing changes.
  • Health endpoint lies: HTTP 200 isn’t enough if you depend on Postgres/Redis. Make /healthz check critical dependencies, with strict timeouts (e.g., 200–500ms) so failures show up quickly.
  • Long-lived connections: WebSockets and streaming endpoints need extra care. Keep the old color running longer and consider draining logic at the app layer.
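To make /healthz honest about dependencies, check each one with a strict timeout. A minimal stdlib sketch (the host/port pairs are placeholders; a TCP connect only proves reachability, so a real check would also run a query such as SELECT 1 for Postgres):

```python
import socket


def tcp_check(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if host:port accepts a TCP connection within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def healthz_payload(deps: dict) -> dict:
    """Build the /healthz response: 'ok' only if every dependency is reachable."""
    results = {name: tcp_check(host, port) for name, (host, port) in deps.items()}
    return {"status": "ok" if all(results.values()) else "degraded", "deps": results}


# In the FastAPI app you could return this from the route, e.g.:
# @app.get("/healthz")
# def healthz():
#     payload = healthz_payload({"postgres": ("127.0.0.1", 5432), "redis": ("127.0.0.1", 6379)})
#     return JSONResponse(payload, status_code=200 if payload["status"] == "ok" else 503)
```

Returning 503 on a degraded status is what lets the deploy script refuse to flip traffic to a color whose database connection is broken.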

Rollback plan (fast and boring)

If error rates jump after the switch, rollback is just “point Nginx back to blue, reload.” That’s why you kept blue running.

  1. Point Nginx back to blue:
sudo tee /etc/nginx/saltbox/api-upstream.conf >/dev/null <<'EOF'
set $saltbox_upstream http://127.0.0.1:9011;
EOF
sudo nginx -t && sudo systemctl reload nginx
  2. Verify:
curl -sS https://api.saltbox.example/v
  3. Freeze the situation for diagnosis (optional but useful): stop the broken color so it doesn’t keep crashing and spamming logs.
sudo systemctl stop saltbox-api@green

If you need a broader containment checklist (suspicious traffic, compromised host, credential rotation), keep this bookmarked: VPS Incident Response Checklist (2026).

Operational extras (small changes that pay off)

  • Log retention: set journald limits so a bad deploy loop doesn’t fill your disk. This guide stays practical: VPS log rotation best practices in 2026.
  • Backups before risky releases: blue/green protects uptime, not data. Pair it with verified backups. For a complete runbook, see VPS Backup Strategy 2026.
  • Staging ports: reserve a third port (e.g., 9013) for manual debugging so you don’t contaminate blue/green.

If you run an API or internal tool where safe deploys matter, a VPS is often the cleanest production footprint. Start with a HostMyCode VPS, and move to managed VPS hosting if you want HostMyCode handling core server maintenance while you focus on releases.

FAQ

Do I need containers for blue-green deployment?

No. Containers help with packaging and dependency consistency, but blue/green is about running two versions side-by-side and switching traffic safely. systemd + separate ports + a reverse proxy gets you most of the benefit.

Will Nginx reload drop active connections?

A standard systemctl reload nginx performs a graceful reload. Existing worker processes keep serving active connections while new workers start with the updated config.

How do I handle database migrations with this approach?

Use backward-compatible migrations: deploy an app version that works with both schemas, apply the migration, then deploy the version that depends on the new schema. For destructive changes, plan a maintenance window or add dual-write logic.

What’s the simplest way to prove traffic switched?

Add a /v endpoint (as shown) or return a build ID in a response header. Then curl the public URL before and after the switch.

Next steps

  • Automate the flow with a single deploy script that: uploads release → updates /srv/saltbox/current → starts inactive color → health checks → flips Nginx → retires old.
  • Add request tracing or structured logs so you can compare blue vs green error rates during rollout. If you want a vendor-neutral approach, read VPS Monitoring with OpenTelemetry Collector.
  • If this VPS becomes business-critical, consider sizing up (extra CPU headroom makes running two colors more comfortable). HostMyCode’s managed VPS hosting is a sensible next step for uptime without expanding your on-call load.
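The automation item above can be sketched end-to-end. This is a sketch under this guide’s assumptions (paths, unit names, ports 9011/9012, root privileges); deploy and the helper names are choices made here, error handling is minimal on purpose, and the old color is deliberately left running for rollback:

```shell
#!/usr/bin/env bash
# deploy.sh — blue/green deploy driver (sketch; assumes the layout from this guide)
APP_ROOT="${APP_ROOT:-/srv/saltbox}"
UPSTREAM_FILE="${UPSTREAM_FILE:-/etc/nginx/saltbox/api-upstream.conf}"

port_for() { if [ "$1" = blue ]; then echo 9011; else echo 9012; fi; }

active_color() {
  if grep -q '9011;' "$UPSTREAM_FILE"; then echo blue; else echo green; fi
}

wait_healthy() {  # wait_healthy PORT — poll /healthz for up to 30s
  local deadline=$(( $(date +%s) + 30 ))
  until curl -fsS "http://127.0.0.1:$1/healthz" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

deploy() {  # deploy /srv/saltbox/releases/<release-dir>
  local release="$1" old new
  old="$(active_color)"
  if [ "$old" = blue ]; then new=green; else new=blue; fi

  ln -sfn "$release" "${APP_ROOT}/current"              # stage the new code
  systemctl restart "saltbox-api@${new}"                # boot the inactive color
  wait_healthy "$(port_for "$new")" \
    || { echo "ABORT: ${new} never became healthy; ${old} still serving" >&2; return 1; }

  printf 'set $saltbox_upstream http://127.0.0.1:%s;\n' "$(port_for "$new")" > "$UPSTREAM_FILE"
  nginx -t && systemctl reload nginx                    # flip traffic
  echo "live: ${new} (${old} kept running; stop it once you trust the release)"
}

# Example: deploy /srv/saltbox/releases/r2026-04-14b
```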