
Caddy is one of the few web servers that treats HTTPS as the default, not a side quest. A Caddy reverse proxy on a VPS can terminate TLS automatically, route traffic to multiple local services, and keep the edge config readable enough that you’ll still trust it months from now.
This is a hands-on runbook for developers and sysadmins who already host an app on a Linux VPS and want a cleaner, safer front door. The working example is simple: a small JSON API and an internal admin UI on the same box, with automatic certificates, sensible security headers, and logs that won’t quietly eat your disk.
Scenario and architecture (what you’re building)
Caddy will listen on ports 80 and 443 and handle the “edge” responsibilities. Specifically, it will:
- Obtain and renew TLS certificates automatically via Let’s Encrypt.
- Reverse proxy to two local services:
  - api.yourdomain.example → 127.0.0.1:9001 (a JSON API)
  - admin.yourdomain.example → 127.0.0.1:9002 (an internal admin UI)
- Serve a tiny health endpoint at https://yourdomain.example/healthz without touching your app.
- Write access logs to a dedicated file with a rotation-friendly format.
- Add strict-but-not-break-everything headers you can verify in seconds.
If you’re coming from Nginx, you can migrate one hostname at a time and keep rollback boring.
Prerequisites (quick checklist)
- A VPS running Debian 13, Ubuntu 24.04 LTS, AlmaLinux 10, or Rocky Linux 10.
- Root or sudo access.
- A domain name with two DNS records pointing to your VPS public IP:
  - api.yourdomain.example (A/AAAA)
  - admin.yourdomain.example (A/AAAA)
- Two services listening locally (or placeholders you can simulate):
  - 127.0.0.1:9001
  - 127.0.0.1:9002
- Open inbound ports 80 and 443 on your firewall/security group.
If you need a firewall baseline first, use your existing rules or follow the approach from UFW firewall setup for a VPS in 2026 and then allow HTTP/HTTPS explicitly.
Why Caddy at the edge (and what to watch for)
The big win is operational: ACME certificate automation is built in, and it behaves like a first-class feature. You don’t babysit cron jobs, hook scripts, or renewals that fail silently until someone notices a browser warning. Routing rules and standard headers also stay compact, which matters when you’re debugging under pressure.
The trade-offs are straightforward:
- You’re relying on Caddy’s ACME automation. It’s reliable, but it still depends on correct DNS and open ports 80/443.
- If you’ve accumulated complex Nginx behavior (maps, njs, custom caching), plan to translate it gradually.
- HTTP/3 (QUIC) is available and useful, but enable it on purpose and test with the clients you actually support.
For most VPS deployments in 2026, the payoff is fewer moving parts and faster recovery when you rebuild.
Set up a VPS that won’t fight you later
Start by updating the host and getting to a known-good baseline. On Debian/Ubuntu:
sudo apt update
sudo apt -y upgrade
sudo reboot
On AlmaLinux/Rocky:
sudo dnf -y update
sudo reboot
If this is a new server, begin with a clean image and keep the edge layer uncomplicated. A HostMyCode VPS fits well because you control the OS, can size for real traffic, and scale resources later without reworking your deployment.
Install Caddy (Debian/Ubuntu and RHEL-family)
Stick to upstream packages so updates remain predictable. Use the official repo instead of downloading random binaries.
Debian 13 / Ubuntu 24.04
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy
You should see a successful install and an enabled systemd unit. Confirm it’s running:
systemctl status caddy --no-pager
● caddy.service - Caddy
Loaded: loaded (/lib/systemd/system/caddy.service; enabled; preset: enabled)
Active: active (running)
AlmaLinux 10 / Rocky Linux 10
sudo dnf install -y 'dnf-command(copr)' curl
sudo dnf copr enable -y @caddy/caddy
sudo dnf install -y caddy
sudo systemctl enable --now caddy
systemctl status caddy --no-pager
If COPR is off-limits in your environment, use Caddy’s official repo method for RHEL-based systems. The goal doesn’t change: package-managed updates and a clean systemd unit.
Build a realistic test backend (so you can verify routing)
If you already have services on 9001 and 9002, skip ahead. Otherwise, run two tiny HTTP servers so you can validate proxying end-to-end before involving your real app.
Create a minimal “API” service using Python on port 9001:
sudo mkdir -p /opt/demo-api
sudo tee /opt/demo-api/app.py >/dev/null <<'PY'
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class H(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/v1/ping':
            body = json.dumps({'ok': True, 'service': 'api', 'port': 9001}).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(('127.0.0.1', 9001), H).serve_forever()
PY
sudo apt install -y python3 || true
sudo dnf install -y python3 || true
Run it with systemd so it behaves like a real service (restarts, logs, and all):
sudo tee /etc/systemd/system/demo-api.service >/dev/null <<'UNIT'
[Unit]
Description=Demo API on 127.0.0.1:9001
After=network.target
[Service]
ExecStart=/usr/bin/python3 /opt/demo-api/app.py
Restart=always
RestartSec=2
User=nobody
# On the RHEL family, use Group=nobody (nogroup exists only on Debian/Ubuntu)
Group=nogroup
[Install]
WantedBy=multi-user.target
UNIT
sudo systemctl daemon-reload
sudo systemctl enable --now demo-api
sudo systemctl status demo-api --no-pager
Expected local test:
curl -sS http://127.0.0.1:9001/v1/ping | jq .
{
"ok": true,
"service": "api",
"port": 9001
}
Now create an “admin UI” placeholder on port 9002 using busybox (simple and dependable):
sudo mkdir -p /opt/demo-admin
echo '<h1>Admin UI</h1><p>If you can see this, proxying works.</p>' | sudo tee /opt/demo-admin/index.html >/dev/null
# Install busybox if needed
sudo apt install -y busybox || true
sudo dnf install -y busybox || true
sudo tee /etc/systemd/system/demo-admin.service >/dev/null <<'UNIT'
[Unit]
Description=Demo Admin UI on 127.0.0.1:9002
After=network.target
[Service]
WorkingDirectory=/opt/demo-admin
ExecStart=/usr/bin/busybox httpd -f -p 127.0.0.1:9002
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
UNIT
sudo systemctl daemon-reload
sudo systemctl enable --now demo-admin
curl -sS http://127.0.0.1:9002/ | head
Configure Caddyfile for two hostnames + health endpoint
The default config lives at /etc/caddy/Caddyfile. Back it up so you can revert quickly:
sudo cp -a /etc/caddy/Caddyfile /etc/caddy/Caddyfile.bak.$(date +%F-%H%M)
Now write the config. Replace the domains and use an email you actually monitor (ACME notices are useful):
sudo tee /etc/caddy/Caddyfile >/dev/null <<'CF'
{
    # Email for ACME account (certificate issuance/expiry notices)
    email admin@yourdomain.example
    # Log format is JSON by default; keep it explicit.
    # Don't log debug unless you're troubleshooting.
}

# A lightweight health endpoint that doesn't hit your backends
yourdomain.example {
    respond /healthz 200 {
        body "ok\n"
    }

    # Optional: redirect everything else to the API hostname.
    # The matcher matters: Caddy orders redir before respond,
    # so an unconditional redir would swallow /healthz too.
    @notHealthz not path /healthz
    redir @notHealthz https://api.yourdomain.example{uri} 308
}

api.yourdomain.example {
    encode zstd gzip

    # Security headers suitable for JSON APIs
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "no-referrer"
        Permissions-Policy "geolocation=(), microphone=(), camera=()"
        -Server
    }

    # Access log to a dedicated file
    log {
        output file /var/log/caddy/api-access.log {
            roll_size 25MiB
            roll_keep 10
            roll_keep_for 168h
        }
        format json
    }

    reverse_proxy 127.0.0.1:9001 {
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up X-Real-IP {remote_host}
        # Timeouts that fail fast during incidents
        transport http {
            dial_timeout 3s
            response_header_timeout 10s
        }
    }
}

admin.yourdomain.example {
    encode zstd gzip

    # Slightly stricter headers for a UI
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Frame-Options "DENY"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }

    log {
        output file /var/log/caddy/admin-access.log {
            roll_size 25MiB
            roll_keep 10
            roll_keep_for 168h
        }
        format json
    }

    # Basic IP allow-list example (edit before using)
    @notAllowed not remote_ip 203.0.113.10 198.51.100.0/24
    respond @notAllowed 403

    reverse_proxy 127.0.0.1:9002
}
CF
Validate before you reload (this catches typos immediately):
sudo caddy validate --config /etc/caddy/Caddyfile
Expected output:
Valid configuration
Reload without dropping connections:
sudo systemctl reload caddy
sudo systemctl status caddy --no-pager
Open firewall ports safely (and avoid locking yourself out)
Caddy needs inbound 80/443. With UFW:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status
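On the RHEL family, firewalld is the usual counterpart to UFW. Assuming firewalld is active (the default on AlmaLinux/Rocky), the equivalent looks like this:

```shell
# firewalld equivalent for AlmaLinux/Rocky (assumes firewalld is running)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
sudo firewall-cmd --list-services
```

The final command should list http and https among the allowed services.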
If you’re tightening SSH at the same time, the bastion pattern stays the cleanest approach once you have more than a couple of servers. This guide is worth bookmarking: SSH Bastion Host Setup: ProxyJump, MFA, and audit logs.
Verify automatic HTTPS (what “good” looks like)
Caddy starts obtaining certificates in the background as soon as the config loads, so issuance is often finished before your first request arrives. From your laptop, run:
curl -I https://api.yourdomain.example/v1/ping
curl -I https://admin.yourdomain.example/
In the response headers, you want to see:
- HTTP/2 200 (or HTTP/3 later if enabled)
- strict-transport-security
- no Server: header (we removed it)
Then confirm the certificate issuer and validity dates:
echo | openssl s_client -servername api.yourdomain.example -connect api.yourdomain.example:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates
If issuance fails, it’s almost always DNS pointing at the wrong IP or port 80 blocked during HTTP-01 validation.
Log handling: keep evidence without filling the disk
Caddy makes file logging easy, which is both helpful and dangerous if you never check retention. The config above uses built-in rolling, and that’s usually enough for a small VPS. Still, align journald, web logs, and app logs with the disk you actually have and the retention you actually need.
For a broader baseline, see VPS log rotation best practices in 2026. If you centralize logs, log shipping with Vector to OpenSearch works nicely with Caddy’s JSON format.
Quick verification:
sudo ls -lh /var/log/caddy/
sudo tail -n 3 /var/log/caddy/api-access.log
Common pitfalls (the ones that burn time)
- DNS still points at the old server. Check with dig +short api.yourdomain.example and compare to your VPS IP.
- Port 80 blocked. Even if you only care about HTTPS, ACME HTTP-01 needs port 80 reachable during issuance. Allow 80 or switch to DNS-01 (more work, but useful in locked-down environments).
- Backend listens on 0.0.0.0. Keep internal services on 127.0.0.1 unless you have a real reason. It reduces blast radius.
- Overly strict headers break the admin UI. If your UI loads third-party assets, a strict CSP can block it. Add CSP later once you’ve audited what the UI loads.
- IP allow-list blocks you. Test allow-lists from a second session or a known IP. Treat it like firewall work: one mistake can cut off access.
Rollback plan (fast and boring)
Rollback works best when you keep it simple. These are the two options you’ll actually use.
Rollback option A: revert the Caddyfile
- Restore the backup:
sudo ls -1 /etc/caddy/Caddyfile.bak.* | tail
sudo cp -a /etc/caddy/Caddyfile.bak.YYYY-MM-DD-HHMM /etc/caddy/Caddyfile
- Validate and reload:
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
Rollback option B: stop Caddy and bring back your previous web server
If Nginx/Apache was serving 80/443 before, stop Caddy and restart the old service:
sudo systemctl stop caddy
sudo systemctl start nginx 2>/dev/null || true
sudo systemctl start apache2 2>/dev/null || true
sudo ss -lntp | grep -E ':80|:443' || true
DNS TTL can make rollback feel slower than it is if you moved records. For safer cutovers, lower TTL a day ahead of time.
Performance and reliability notes (what to tune first)
On small services, reliability usually comes from keeping the edge stable and easy to observe.
- Compression: encode zstd gzip saves bandwidth. JSON often shrinks dramatically with gzip; zstd helps modern clients.
- Timeouts: short dial/header timeouts prevent hung upstreams from tying up worker capacity.
- Keep backends private: bind them to localhost and let Caddy handle public exposure.
If you’re chasing latency spikes, don’t guess. Trace. The eBPF workflow in Linux VPS monitoring with eBPF helps you separate network stalls, CPU contention, and application lockups.
HostMyCode notes (where this fits)
For an API plus an admin UI on one machine, a VPS is usually the cleanest starting point. You get predictable networking, full control of ports 80/443, and the freedom to run Caddy, systemd services, and your app stack without platform constraints.
If you’d rather offload OS patch cadence and baseline hardening while keeping server control, managed VPS hosting is the practical upgrade. If you prefer running everything yourself, start with a standard HostMyCode VPS and keep the config in version control.
If you’re putting Caddy in front of production apps, leave headroom for TLS handshakes, logs, and the next service you’ll add. A HostMyCode VPS is a solid base. If you want updates and baseline hardening handled for you, managed VPS hosting keeps the ops work from piling up.
FAQ: Caddy reverse proxy on a VPS
Do I need Certbot with Caddy?
No. Caddy obtains and renews certificates automatically via ACME when you use hostnames and listen on 80/443. Certbot is usually redundant in this setup.
Can I proxy WebSockets and SSE through Caddy?
Yes. Caddy supports WebSockets and server-sent events through reverse_proxy without special modules in most cases. Verify with a real client and watch timeouts if connections are long-lived.
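For long-lived streams, the main knob is response buffering. A hedged Caddyfile sketch (recent Caddy versions detect text/event-stream and disable buffering automatically, so flush_interval -1 is belt-and-suspenders rather than strictly required):

```
admin.yourdomain.example {
    reverse_proxy 127.0.0.1:9002 {
        # Stream bytes to the client immediately (no response buffering);
        # useful for SSE. WebSockets need no extra directives.
        flush_interval -1
    }
}
```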
What’s the safest way to restrict an admin subdomain?
Start with an IP allow-list (like the example in the Caddyfile) or require an identity-aware proxy in front. Don’t rely on obscurity or a hidden URL path.
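If an allow-list alone is too brittle (roaming IPs, on-call from hotel Wi-Fi), Caddy's built-in HTTP basic auth is a reasonable stopgap before full SSO. Sketch below uses a placeholder you'd replace with a hash generated by caddy hash-password; note that Caddy releases before v2.8 spell the directive basicauth.

```
admin.yourdomain.example {
    # Caddy v2.8+ spells this basic_auth; older releases use basicauth.
    basic_auth {
        # Generate the bcrypt hash with: caddy hash-password
        admin REPLACE_WITH_BCRYPT_HASH
    }
    reverse_proxy 127.0.0.1:9002
}
```

Combine it with the IP allow-list rather than replacing one with the other.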
Where should I look if HTTPS issuance fails?
Check DNS first, then confirm inbound 80/443 reachability. On the server, inspect logs:
sudo journalctl -u caddy -n 200 --no-pager
Next steps (small improvements with high payoff)
- Add DNS-01 validation if your environment can’t expose port 80.
- Move admin authentication to SSO (OIDC) or an identity-aware proxy if the UI is sensitive.
- Centralize logs if you operate multiple VPS nodes (Caddy’s JSON logs are easy to ship and index).
- Write a restore plan for configs and app data so a rebuild is measured in minutes. If you want a strong pattern, adapt the verification ideas in VPS backup strategy with restic + S3 (verify the restore, not just the backup job).
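The DNS-01 item above needs one caveat: it requires a Caddy build that includes a DNS provider module, obtained for example with xcaddy. A sketch assuming the github.com/caddy-dns/cloudflare plugin and an API token in the environment:

```
api.yourdomain.example {
    tls {
        # Requires a Caddy build with the Cloudflare DNS module, e.g.:
        #   xcaddy build --with github.com/caddy-dns/cloudflare
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:9001
}
```

With DNS-01, port 80 never needs to be reachable from the internet, which suits locked-down environments.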
Summary
A Caddy reverse proxy on a VPS gives you automatic HTTPS, clean routing, and sane defaults without turning the edge into its own project. Keep backends on localhost, validate configs before reloads, and treat logging as part of the design—not an afterthought.
If you’re building on new infrastructure, start with a right-sized HostMyCode VPS. If you want to hand off patching and baseline hardening while keeping full server control, managed VPS hosting is the lower-maintenance option.