
A single VPS can comfortably run a handful of small services—an API, an internal dashboard, a webhook receiver—right up until you want clean hostnames, real TLS, and deployments you can repeat. That’s where an Nginx reverse proxy on a VPS pays off: one public edge on 80/443, multiple backends behind it, correct proxy headers, and a sane way to roll forward (or back) without improvising on a live box.
This walkthrough assumes Ubuntu Server 24.04 LTS and Nginx 1.24+ from the Ubuntu repo (still a common baseline in 2026). You’ll proxy two example apps: a Node.js API on 127.0.0.1:3001 and a Python/FastAPI service on 127.0.0.1:9001. You’ll also wire up WebSocket support, add a simple health endpoint you can monitor, and set up a rollback routine you can execute quickly.
Scenario and architecture (what you’re building)
Public traffic reaches your VPS on ports 80 and 443. Nginx terminates TLS, forces HTTPS, and routes each hostname to a local backend:
- `api.example.net` → Node.js API (`127.0.0.1:3001`)
- `ops.example.net` → FastAPI internal tool (`127.0.0.1:9001`)
Both apps bind only to loopback. Your firewall exposes only 22 (SSH), 80, and 443. Certbot handles certificates and renewals.
Prerequisites
- A VPS with Ubuntu Server 24.04 LTS, a public IPv4/IPv6 address, and SSH access.
- Two DNS records pointing at the VPS (A/AAAA): `api.example.net` and `ops.example.net`.
- Two services running locally (or use the sample “hello” services below) listening on `127.0.0.1:3001` and `127.0.0.1:9001`.
- Root or sudo privileges.
If you want a safe baseline before you expose web ports, skim our UFW firewall setup for a VPS in 2026 first.
Step 1: Prep the VPS (packages, firewall, and sanity checks)
1. Update the system and install Nginx + Certbot:

```bash
sudo apt update
sudo apt -y upgrade
sudo apt -y install nginx certbot python3-certbot-nginx
```

Expected output: Nginx installs and enables a systemd unit.

2. Confirm Nginx is running:

```bash
systemctl status nginx --no-pager
```

Expected output contains `Active: active (running)`.

3. Open only the required ports with UFW (adjust SSH if you don’t use 22):

```bash
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw --force enable
sudo ufw status verbose
```

Expected output includes rules for `22/tcp`, `80/tcp`, and `443/tcp`.
Step 2: Stand up two local backends (sample services you can swap later)
If you already have your own services running on those ports, jump to Step 3. Otherwise, these tiny services give you something predictable to proxy while you validate routing and headers.
Option A: Node.js API on 127.0.0.1:3001
1. Install Node.js (the Ubuntu repo is fine for a demo; production often standardizes on Node 20/22 via NodeSource or your platform baseline):

```bash
sudo apt -y install nodejs npm
```

2. Create a tiny server:

```bash
sudo mkdir -p /srv/node-api
sudo tee /srv/node-api/server.js >/dev/null <<'EOF'
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/healthz') {
    res.writeHead(200, {'Content-Type': 'application/json'});
    return res.end(JSON.stringify({status: 'ok', service: 'node-api'}));
  }
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(`node-api says hi\npath=${req.url}\nxfwd=${req.headers['x-forwarded-for'] || ''}\nproto=${req.headers['x-forwarded-proto'] || ''}\n`);
});

server.listen(3001, '127.0.0.1', () => {
  console.log('node-api listening on 127.0.0.1:3001');
});
EOF
```

3. Create a systemd service:

```bash
sudo tee /etc/systemd/system/node-api.service >/dev/null <<'EOF'
[Unit]
Description=Demo Node API (loopback only)
After=network.target

[Service]
Type=simple
WorkingDirectory=/srv/node-api
ExecStart=/usr/bin/node /srv/node-api/server.js
Restart=on-failure
User=www-data
Group=www-data

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now node-api
```

4. Verify locally:

```bash
curl -sS http://127.0.0.1:3001/healthz
```

Expected output: `{"status":"ok","service":"node-api"}`
Option B: FastAPI service on 127.0.0.1:9001
1. Install Python tooling:

```bash
sudo apt -y install python3-venv
```

2. Create the app environment:

```bash
sudo mkdir -p /srv/ops-tool
sudo python3 -m venv /srv/ops-tool/.venv
sudo /srv/ops-tool/.venv/bin/pip install --upgrade pip
sudo /srv/ops-tool/.venv/bin/pip install fastapi uvicorn
```

3. Add a minimal FastAPI server:

```bash
sudo tee /srv/ops-tool/app.py >/dev/null <<'EOF'
from fastapi import FastAPI, Request

app = FastAPI()

@app.get('/healthz')
async def healthz():
    return {"status": "ok", "service": "ops-tool"}

@app.get('/')
async def root(request: Request):
    return {
        "msg": "ops-tool root",
        "client": request.client.host if request.client else None,
        "x_forwarded_for": request.headers.get('x-forwarded-for'),
        "x_forwarded_proto": request.headers.get('x-forwarded-proto'),
    }
EOF
```

4. Run it under systemd:

```bash
sudo tee /etc/systemd/system/ops-tool.service >/dev/null <<'EOF'
[Unit]
Description=Demo Ops Tool (FastAPI, loopback only)
After=network.target

[Service]
Type=simple
WorkingDirectory=/srv/ops-tool
ExecStart=/srv/ops-tool/.venv/bin/uvicorn app:app --host 127.0.0.1 --port 9001
Restart=on-failure
User=www-data
Group=www-data

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now ops-tool
```

5. Verify locally:

```bash
curl -sS http://127.0.0.1:9001/healthz
```

Expected output: `{"status":"ok","service":"ops-tool"}`
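Both demo units run as `www-data` but otherwise use systemd defaults. If you keep them beyond the demo, a few hardening directives are cheap insurance. This is an optional sketch, not part of the original setup; the path follows systemd’s drop-in convention (`sudo systemctl edit node-api` creates it for you):

```ini
# /etc/systemd/system/node-api.service.d/hardening.conf
# Optional drop-in; the same block works for ops-tool.service.
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
```

After adding it, run `sudo systemctl daemon-reload && sudo systemctl restart node-api` and confirm the health endpoint still answers. `ProtectSystem=strict` mounts the filesystem read-only for the service, which is fine here because neither demo app writes to disk.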
Step 3: Create Nginx upstreams and server blocks (clean, readable layout)
Ubuntu’s Nginx layout is straightforward and worth sticking with:
- `/etc/nginx/nginx.conf` (global)
- `/etc/nginx/sites-available/` and `sites-enabled/` (vhosts)
- `/etc/nginx/snippets/` (reusable bits)
You’ll create two snippets (headers + WebSockets), then two site configs.
1. Create a proxy headers snippet:

```bash
sudo tee /etc/nginx/snippets/proxy-headers.conf >/dev/null <<'EOF'
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-Id $request_id;
EOF
```

2. Create a WebSocket snippet:

```bash
sudo tee /etc/nginx/snippets/websocket.conf >/dev/null <<'EOF'
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
EOF
```

3. Create the `api.example.net` site. The 443 block references Ubuntu’s self-signed “snakeoil” certificate as a placeholder so that `nginx -t` passes before any real certificate exists; Certbot replaces those two lines in Step 4:

```bash
sudo apt -y install ssl-cert   # provides the snakeoil placeholder certificate
sudo tee /etc/nginx/sites-available/api.example.net >/dev/null <<'EOF'
upstream node_api_upstream {
    server 127.0.0.1:3001;
    keepalive 32;
}

server {
    listen 80;
    listen [::]:80;
    server_name api.example.net;

    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.example.net;

    # Placeholder certs; Certbot installs real ones in Step 4
    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

    access_log /var/log/nginx/api.access.log;
    error_log /var/log/nginx/api.error.log;

    # Simple upstream health proxy
    location = /healthz {
        include /etc/nginx/snippets/proxy-headers.conf;
        proxy_pass http://node_api_upstream/healthz;
    }

    # Example: WebSocket endpoint at /ws (optional)
    location /ws {
        include /etc/nginx/snippets/proxy-headers.conf;
        include /etc/nginx/snippets/websocket.conf;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
        proxy_pass http://node_api_upstream;
    }

    location / {
        include /etc/nginx/snippets/proxy-headers.conf;
        proxy_pass http://node_api_upstream;
    }
}
EOF
```

4. Create the `ops.example.net` site (with basic auth as a simple safety belt):

```bash
sudo apt -y install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd-ops admin
```

You’ll be prompted for a password. Put it in your password manager, not your shell history.

```bash
sudo tee /etc/nginx/sites-available/ops.example.net >/dev/null <<'EOF'
upstream ops_tool_upstream {
    server 127.0.0.1:9001;
    keepalive 16;
}

server {
    listen 80;
    listen [::]:80;
    server_name ops.example.net;

    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ops.example.net;

    # Placeholder certs; Certbot installs real ones in Step 4
    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

    access_log /var/log/nginx/ops.access.log;
    error_log /var/log/nginx/ops.error.log;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd-ops;

    location = /healthz {
        include /etc/nginx/snippets/proxy-headers.conf;
        proxy_pass http://ops_tool_upstream/healthz;
    }

    location / {
        include /etc/nginx/snippets/proxy-headers.conf;
        proxy_pass http://ops_tool_upstream;
    }
}
EOF
```

5. Enable both sites and disable the default:

```bash
sudo rm -f /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/api.example.net /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/ops.example.net /etc/nginx/sites-enabled/
```

6. Test and reload Nginx:

```bash
sudo nginx -t
sudo systemctl reload nginx
```

Expected output from the test includes `syntax is ok` and `test is successful`.
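Two refinements worth knowing about. First, `websocket.conf` hardcodes `Connection "upgrade"`, which is fine for a dedicated `/ws` location but wrong if the same location also serves plain HTTP; the standard fix is a `map` that follows what the client actually sent. Second, the `keepalive` lines in the upstream blocks only take effect when proxied requests use HTTP/1.1 with an empty `Connection` header. A sketch of the map (the `conf.d` filename is my suggestion; `map` must live in the `http` context):

```nginx
# /etc/nginx/conf.d/upgrade-map.conf (suggested name)
# Sets Connection to "upgrade" only when the client asked to upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

With the map in place, `websocket.conf` can use `proxy_set_header Connection $connection_upgrade;` instead of the literal string. And to make upstream keepalive actually work for normal API traffic, add `proxy_http_version 1.1;` and `proxy_set_header Connection "";` to the non-WebSocket locations (or to a shared snippet).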
Step 4: Issue TLS certificates with Certbot (and verify renewals)
1. Request certificates for both hostnames (Certbot will edit the SSL lines inside your server blocks):

```bash
sudo certbot --nginx -d api.example.net -d ops.example.net
```

Expected result: Certbot obtains certificates, installs them into the Nginx config, and reloads Nginx.

2. Confirm the renewal timer is present:

```bash
systemctl list-timers --all | grep -E 'certbot|letsencrypt' || true
```

3. Dry-run a renewal (safe to run anytime):

```bash
sudo certbot renew --dry-run
```

Expected output mentions that the simulated renewal succeeded.
Step 5: Verify routing, headers, and auth end-to-end
Run these from your laptop (or any internet-connected machine). Swap in your real domains.
1. API health check over HTTPS:

```bash
curl -i https://api.example.net/healthz
```

Expected: `HTTP/2 200` (or `HTTP/1.1 200`) and a JSON body from the Node service.

2. Confirm HTTP redirects to HTTPS:

```bash
curl -I http://api.example.net/
```

Expected: `301` with a `Location: https://api.example.net/...` header.

3. The ops tool requires basic auth:

```bash
curl -I https://ops.example.net/
```

Expected: `401` and a `WWW-Authenticate` header.

```bash
curl -u admin -i https://ops.example.net/healthz
```

Expected: `200` with `{"status":"ok"...}`.

4. Check that the backends are not reachable publicly (these should fail):

```bash
curl -i http://YOUR_VPS_IP:3001/healthz || true
curl -i http://YOUR_VPS_IP:9001/healthz || true
```

Expected: connection refused or timeout, because the services listen on `127.0.0.1` only.
Step 6: Add small production touches (timeouts, limits, and safer defaults)
You don’t need a giant tuning file for two services. You do want a few guardrails: limits that stop obvious abuse, and timeouts that keep clients from hanging forever.
1. Set basic limits and timeouts in a dedicated file:

```bash
sudo tee /etc/nginx/conf.d/00-proxy-sane-defaults.conf >/dev/null <<'EOF'
client_max_body_size 20m;

# Keep proxy behavior predictable for APIs
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

# Don't buffer large API responses into disk by accident
proxy_buffering off;

# Client-side keepalive: how long idle client connections stay open
keepalive_timeout 65;
EOF
sudo nginx -t
sudo systemctl reload nginx
```

2. Quick diagnostic: if disk usage starts creeping up later, it’s usually logs or buffering. This guide walks you through proving the culprit quickly: VPS disk space troubleshooting: find what’s filling your Linux server and fix it safely (2026).
Step 7: Deploy changes safely (with a rollback you can execute under pressure)
Most Nginx outages come from boring mistakes: a missing semicolon, an extra brace, a file saved in the wrong place. Treat your config like something you deploy, not something you “tweak,” and keep a rollback that’s one command away.
1. Before any change, snapshot your active config:

```bash
sudo install -d -m 0755 /root/nginx-backups
sudo tar -C /etc/nginx -czf /root/nginx-backups/nginx-etc-$(date +%F-%H%M%S).tgz .
```

2. Make your edits (adjust `proxy_read_timeout`, add a new `location`, etc.), then validate syntax:

```bash
sudo nginx -t
```

If this fails, stop there. Fix it before you reload.

3. Reload without dropping connections:

```bash
sudo systemctl reload nginx
```

4. Verify after the reload (hit both vhosts):

```bash
curl -fsS https://api.example.net/healthz
curl -u admin -fsS https://ops.example.net/healthz
```

5. Rollback (two practical options):

- Fast rollback for a single file: keep a `.bak` copy before edits.

```bash
sudo cp /etc/nginx/sites-available/api.example.net /etc/nginx/sites-available/api.example.net.bak
```

- Full rollback from the tarball:

```bash
sudo tar -C /etc/nginx -xzf /root/nginx-backups/nginx-etc-YYYY-MM-DD-HHMMSS.tgz
sudo nginx -t
sudo systemctl reload nginx
```
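If you also deploy the backend apps frequently, the same snapshot–validate–reload loop supports a simple blue/green cutover. A sketch, not something configured earlier in this guide: the second port and the comment-toggling convention are assumptions.

```nginx
# Sketch: run two copies of the API on different loopback ports.
# Deploy to the idle ("green") instance, curl it directly on 127.0.0.1:3002,
# then swap the comments and run: sudo nginx -t && sudo systemctl reload nginx
upstream node_api_upstream {
    server 127.0.0.1:3001;      # blue  (live)
    # server 127.0.0.1:3002;    # green (idle)
    keepalive 32;
}
```

Rolling back is the same one-line swap in the other direction, which is about as fast as a rollback gets.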
Common pitfalls (and how to recognize them)
- WebSockets connect then immediately drop. Symptom: you never see `101 Switching Protocols`. Fix: make sure your WebSocket `location` includes the `Upgrade`/`Connection` headers and a long `proxy_read_timeout`. Use `/etc/nginx/snippets/websocket.conf` everywhere you proxy WebSockets.

- App generates wrong redirects (http instead of https). Symptom: login flows bounce to HTTP or the wrong scheme. Fix: send `X-Forwarded-Proto` and configure your framework to trust proxy headers (FastAPI/Uvicorn, Express behind a proxy, etc.).

- 502 Bad Gateway. Usually means the upstream crashed, isn’t listening, or you proxied to the wrong port. Confirm with:

```bash
sudo ss -lntp | grep -E ':3001|:9001'
sudo journalctl -u node-api -n 50 --no-pager
sudo journalctl -u ops-tool -n 50 --no-pager
```

- Certbot succeeds but HTTPS still serves the default site. Most often you have a `server_name` mismatch, or an enabled site is still marked as `default_server`. List enabled sites and search for duplicates:

```bash
ls -la /etc/nginx/sites-enabled/
grep -R "default_server" -n /etc/nginx | head
```

- Logs grow faster than expected. Reverse proxies can log every asset and every health check. If disk space starts getting tight, make log rotation explicit and confirm retention. This post is the practical reference: VPS log rotation best practices in 2026.
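One way to rule out the wrong-site pitfall entirely is an explicit catch-all vhost that refuses requests for hostnames you haven’t configured. A sketch: the filename is a suggestion, and `444` is Nginx’s special non-status that closes the connection without sending a response.

```nginx
# /etc/nginx/sites-available/00-catchall (enable with a symlink, like the other sites)
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;   # close the connection for unmatched hostnames
}
```

A 443 counterpart additionally needs `ssl` plus some certificate (a self-signed placeholder is enough, since clients hitting it will see a name mismatch anyway); start with port 80 and add it if scanners bother you.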
Operational checks you should automate
Once the routing works, assume something will break later: a bad deploy, an expired credential, a renewal edge case. A couple of small checks catch most failures early.
- TLS and HTTP status checks: run these from an external runner every minute (or plug them into your monitoring stack).

```bash
curl -fsS https://api.example.net/healthz >/dev/null
curl -u admin:YOURPASS -fsS https://ops.example.net/healthz >/dev/null
```

- Nginx config check in CI: lint before you ship config. Even a simple pipeline step that runs `nginx -t` inside a container catches a lot.

- Centralize logs if you operate more than one VPS: debugging gets faster when you can search across boxes. If that’s your next move, this is a solid pattern: VPS Log Shipping with Vector: Centralize Linux Logs to OpenSearch in 2026.
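If your monitoring runner speaks Python rather than shell, the same checks fit in a few standard-library lines. A sketch: the function names are mine, and wiring it to your real hostnames, basic auth, and alerting is left to your scheduler.

```python
"""Tiny external health probe (stdlib only)."""
import urllib.request


def healthy(url: str, timeout: float = 5.0) -> bool:
    """True if the URL answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:  # DNS failure, refused connection, TLS error, timeout, non-2xx
        return False


def failing(targets):
    """Return the subset of target URLs that are not healthy."""
    return [u for u in targets if not healthy(u)]
```

Call `failing([...])` with your real health URLs from cron or a systemd timer and alert when the list is non-empty; for the basic-auth ops host, attach a `urllib.request.HTTPBasicAuthHandler` to an opener, or keep using `curl -u` for that one check.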
Next steps (where to take this setup)
- Add rate limiting for noisy endpoints. Nginx’s `limit_req_zone` + `limit_req` works well for login and token endpoints.

- Use blue/green upstreams for deploys. Define two upstream blocks (A/B), flip traffic by changing one line, reload, and roll back instantly if you see errors.

- Lock down admin surfaces. Basic auth is a start; VPN-only access is better for internal tools. If you want private access without opening extra ports, see: Tailscale VPS VPN setup: secure admin access to private services without opening ports (2026 guide).
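The rate-limiting idea can be sketched concretely. Assumptions: the zone name, the `10r/m` rate, and the `/login` path are placeholders for your own hot endpoints; `limit_req_zone` must sit in the `http` context (a `conf.d` file works), while `limit_req` goes inside the location you want to protect.

```nginx
# In the http{} context (e.g. /etc/nginx/conf.d/10-ratelimit.conf):
limit_req_zone $binary_remote_addr zone=login_zone:10m rate=10r/m;
limit_req_status 429;

# In the api.example.net 443 server{} block:
location = /login {
    limit_req zone=login_zone burst=5 nodelay;
    include /etc/nginx/snippets/proxy-headers.conf;
    proxy_pass http://node_api_upstream;
}
```

`burst=5 nodelay` lets a short spike through immediately but rejects anything beyond it; without `limit_req_status`, rejected requests get a `503`, which monitoring tends to misread as an outage, so `429` is the usual choice for APIs.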
Summary
An Nginx reverse proxy on a VPS gives you one controlled edge for multiple services: TLS termination, consistent headers, WebSocket support, and one place to add guardrails. Keep your config modular (snippets), validate changes with `nginx -t`, and take a quick tarball backup before edits. You’ll spend less time firefighting and more time shipping.
If you’re doing this for a real API or internal tool, pick a VPS with predictable CPU, NVMe-backed storage, and some headroom; TLS handshakes and traffic spikes are rarely polite. HostMyCode offers VPS plans that suit Nginx-based routing, and managed VPS hosting if you’d rather have OS updates, patching, and day-to-day ops handled while you stay focused on the application layer.
FAQ
Should my apps bind to 0.0.0.0 or 127.0.0.1 behind Nginx?
Prefer 127.0.0.1 (or a private interface) so the app can’t be reached directly from the internet. Keep Nginx as your controlled entry point.
Do I need HTTP/3 for this setup in 2026?
Not for most small services. Start with HTTP/2 on 443 (as shown). Add HTTP/3 later only if you’ve measured client latency gains and you’re prepared to manage QUIC-specific behavior.
What’s the fastest way to debug a 502 from Nginx?
First, confirm the upstream port is listening (`ss -lntp`). Then read the specific vhost error log (for example `/var/log/nginx/api.error.log`). A 502 almost always means the upstream is down or unreachable.
How do I roll back a broken Nginx change without downtime?
Keep a copy of the last known-good config (tarball or `.bak` files), restore it, run `nginx -t`, then `systemctl reload nginx`. Reload is graceful and doesn’t drop established connections.
Can I host more than two apps this way?
Yes. Add another upstream and another `server_name` block (or route by path), then verify with a new health endpoint. After that, the real constraints are RAM, CPU, and how disciplined you are about logs and deployments.