UFW Firewall Setup for a VPS in 2026: A Practical Hardening Playbook (SSH-Safe)

UFW firewall setup for a VPS in 2026 with SSH-safe rules, verification, rollback, and common pitfalls.

By Anurag Singh
Updated on Apr 13, 2026
Category: Blog

A firewall mistake can lock you out faster than any attacker ever will. The upside is that UFW firewall setup for a VPS stays boring and repeatable if you treat it like a proper change window: decide the rules, confirm you still have access, then enable—only after you’ve staged a rollback.

This playbook is for developers and sysadmins running a small-to-mid production box: one VPS hosting an API plus a couple of admin endpoints. Examples use UFW on Ubuntu Server 24.04 LTS (a common pick in 2026), but the same sequence works on most Debian-based images.

Scenario and goals (what you’re building)

You have a VPS that runs:

  • SSH on 22/tcp (we’ll optionally switch to 2222/tcp later)
  • Nginx serving TLS on 443/tcp and redirecting on 80/tcp
  • A private admin service bound to localhost (no public port)
  • Optional: WireGuard for maintenance access later

The end state is straightforward: default-deny inbound, allow only what you can defend, and confirm you can still manage the host after every change.
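
Before walking the steps one by one, it can help to stage the whole change as a reviewable plan. The sketch below only prints the commands in the order this guide applies them; nothing touches the firewall until you review the output and run the lines yourself. The admin IP is the example office address used later in this post.

```shell
#!/usr/bin/env bash
# Dry-run plan for the hardening sequence: print, review, then run.
# ADMIN_IP is a placeholder (the example office address from this guide).
ADMIN_IP="203.0.113.10"
plan=(
  "ufw default deny incoming"
  "ufw default allow outgoing"
  "ufw allow from ${ADMIN_IP} to any port 22 proto tcp comment 'SSH from office'"
  "ufw allow 80/tcp comment 'HTTP (redirect to HTTPS)'"
  "ufw allow 443/tcp comment 'HTTPS'"
  "ufw enable"
)
# Print one command per line so the change can be reviewed before running.
printf '%s\n' "${plan[@]}"
```

Notice the ordering: defaults first, SSH before web, and `ufw enable` dead last.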

Prerequisites

  • A VPS with sudo access (root or a sudo user)
  • Console access via your provider (web console / VNC / rescue console) for emergencies
  • Ubuntu Server 24.04 LTS (or similar), systemd, OpenSSH installed
  • Your admin IP(s) (home/office/VPN egress). If dynamic, you’ll use a wider CIDR or an alternate access method.

If you’re still picking infrastructure, start with a HostMyCode VPS so firewall changes, snapshots, and recovery access live in one place. If you’d rather not own the sharp edges on production, managed VPS hosting is the safer choice.

Step 1 — Confirm what’s listening before you touch the firewall

Don’t work from assumptions. Check what the host is actually exposing.

sudo ss -lntup

Expected output includes something like:

LISTEN 0 4096 0.0.0.0:22      0.0.0.0:*  users:(("sshd",pid=...,fd=...))
LISTEN 0 511  0.0.0.0:80      0.0.0.0:*  users:(("nginx",pid=...,fd=...))
LISTEN 0 511  0.0.0.0:443     0.0.0.0:*  users:(("nginx",pid=...,fd=...))
LISTEN 0 2048 127.0.0.1:9100  0.0.0.0:*  users:(("my-admin",pid=...,fd=...))

If anything surprises you (say 0.0.0.0:3306 for MySQL), fix the service binding first. A firewall can paper over it, but you’ll sleep better when the daemon is scoped correctly.
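
If you want this check to be repeatable, the ss output can be screened mechanically. Here is a sketch that flags wildcard binds outside an expected port list; the `check_listeners` name and the 22/80/443 allowlist are assumptions for this example setup.

```shell
# Flag listeners bound to all interfaces whose port is not expected.
# Reads "ss -lntup"-style output on stdin, so it can be tested on a
# saved dump. The expected port list is an assumption for this guide.
check_listeners() {
  awk -v ok="22 80 443" '
    BEGIN { n = split(ok, a, " "); for (i = 1; i <= n; i++) allow[a[i]] = 1 }
    {
      for (i = 1; i <= NF; i++) {
        # a wildcard bind looks like 0.0.0.0:PORT, [::]:PORT or *:PORT
        if ($i ~ /^(0\.0\.0\.0|\[::\]|\*):[0-9]+$/) {
          split($i, p, ":")
          port = p[length(p)]
          if (!(port in allow)) print "unexpected public listener on port " port
        }
      }
    }'
}
# Usage: sudo ss -lntup | check_listeners
```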

Related reading if you’re building for clean restarts and fewer surprises: systemd socket activation for FastAPI is a solid pattern.

Step 2 — Install UFW and set a predictable baseline

UFW is often already present on Ubuntu. Confirm, then install if needed.

sudo apt update
sudo apt install -y ufw

Check status:

sudo ufw status verbose

You’ll often see:

Status: inactive

Set defaults before you add allow rules. These two lines prevent “temporary” exceptions from becoming permanent exposure.

sudo ufw default deny incoming
sudo ufw default allow outgoing

For most VPS roles, outbound allow is the sensible starting point. If you run a tightly controlled environment (PCI-style), lock down outbound later—after you’ve confirmed what your services actually need.
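
While UFW is still inactive, `ufw status verbose` won't echo these defaults back, but they do persist in /etc/default/ufw on stock Ubuntu. A small sketch to confirm them; the path is the standard location, and an alternate file can be passed for testing on a copy:

```shell
# Confirm the stored default policies; they live in /etc/default/ufw
# even while the firewall is inactive. Stock Ubuntu path assumed.
show_defaults() {
  grep -E '^DEFAULT_(INPUT|OUTPUT)_POLICY' "${1:-/etc/default/ufw}"
}
# After the two "ufw default" commands above, expect:
#   DEFAULT_INPUT_POLICY="DROP"
#   DEFAULT_OUTPUT_POLICY="ACCEPT"
```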

Step 3 — Add SSH rules first (and make them IP-scoped)

This is the safety-critical step: allow SSH before you enable UFW, and restrict it to your admin IPs.

Example: your office IP is 203.0.113.10.

sudo ufw allow from 203.0.113.10 to any port 22 proto tcp comment 'SSH from office'

If you have a second maintenance location:

sudo ufw allow from 198.51.100.24 to any port 22 proto tcp comment 'SSH from home'

Verify the rules are queued:

sudo ufw status numbered

Expected output resembles:

[ 1] 22/tcp ALLOW IN 203.0.113.10
[ 2] 22/tcp ALLOW IN 198.51.100.24

If your IP is dynamic: widen to a CIDR you can live with for a short period (for example, your VPN provider’s egress range), then tighten it once you’re stable. If you’d rather avoid public SSH altogether, use a controlled tunnel instead. The HostMyCode guide on reverse SSH tunnel access walks through a safer pattern for remote reachability.

Step 4 — Allow the public web ports you actually need

A typical TLS-first service needs 80 and 443. Keep these rules explicit and easy to read.

sudo ufw allow 80/tcp comment 'HTTP (redirect to HTTPS)'
sudo ufw allow 443/tcp comment 'HTTPS'

If you run an API on a separate port during development (say 8081), don’t “open it for a day” and forget it. Put Nginx in front, or keep the service bound to 127.0.0.1.
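
One way to keep that dev port private is an Nginx location block that proxies to the loopback service. This is a sketch, not a drop-in config: the /api/ path and the 8081 upstream are assumptions, so adapt them to your vhost.

```nginx
# Hypothetical vhost fragment: expose the dev API only through Nginx.
# The service stays bound to 127.0.0.1:8081, so no extra UFW rule is needed.
location /api/ {
    proxy_pass http://127.0.0.1:8081/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```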

Step 5 — (Optional) Create a safe “break-glass” SSH rule with a time limit

Sometimes you need emergency access from anywhere: a new ISP, a broken VPN, a hotel network. If you do that, make the exception expire automatically.

UFW doesn’t do time-based rules, but a systemd timer can remove a rule on schedule.

Add the temporary rule:

sudo ufw allow 22/tcp comment 'TEMP break-glass SSH (remove ASAP)'

Create a one-shot cleanup script:

sudo tee /usr/local/sbin/ufw-remove-breakglass >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Remove the first rule carrying the break-glass comment.
# Note: "ufw status numbered" pads single-digit indexes as "[ 1]", so
# awk's $1 would only capture "["; extract the number with sed instead.
n=$(ufw status numbered | sed -n '/TEMP break-glass SSH/s/^\[ *\([0-9]*\)\].*/\1/p' | head -n1)
if [ -n "$n" ]; then
  ufw --force delete "$n"
fi
EOF
sudo chmod +x /usr/local/sbin/ufw-remove-breakglass

Then a timer to run it in 30 minutes:

sudo tee /etc/systemd/system/ufw-remove-breakglass.service >/dev/null <<'EOF'
[Unit]
Description=Remove temporary break-glass SSH UFW rule

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/ufw-remove-breakglass
EOF

sudo tee /etc/systemd/system/ufw-remove-breakglass.timer >/dev/null <<'EOF'
[Unit]
Description=Run break-glass SSH rule cleanup

[Timer]
OnActiveSec=30min
Persistent=false

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ufw-remove-breakglass.timer

If you manage more than one server, this is the kind of guardrail that prevents “I’ll clean it up later” from becoming next month’s incident.

Step 6 — Enable UFW without cutting your own access

Before you enable anything, open a second SSH session and keep both connected. If one gets dropped, you may still have time to fix the rules from the other.

Enable UFW:

sudo ufw enable

Expected prompt and output:

Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

Then check status immediately:

sudo ufw status verbose

You want Status: active plus your allow rules exactly as intended.

Step 7 — Verify from the outside (don’t trust local checks)

ss tells you what’s listening. It doesn’t prove what’s reachable. Test from another machine.

From your laptop (or a CI box), scan only the ports you expect to be open:

nmap -Pn -p 22,80,443 YOUR_SERVER_IP

Expected output should show open for allowed ports and filtered (or closed) for everything else. Example:

PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  open     http
443/tcp open     https

Also check the application path, not just the socket:

curl -I http://YOUR_DOMAIN
curl -I https://YOUR_DOMAIN

Expected: 301 or 308 from HTTP to HTTPS, then a 200 (or your app’s normal status) over HTTPS.
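
If you script the external check, nmap's grepable output (-oG -) is easier to parse than the default table. Here is a sketch that flags anything open beyond the expected three ports; the `check_scan` name and the 22/80/443 list are assumptions for this example.

```shell
# Fail fast if an external scan shows an open port outside the expected
# set. Reads nmap grepable output (-oG -) on stdin, so it can be tested
# against a saved scan. The expected ports are an assumption.
check_scan() {
  awk '
    /Ports:/ {
      n = split($0, f, "[ ,]")
      for (i = 1; i <= n; i++) {
        # grepable entries look like "22/open/tcp//ssh//..."
        if (f[i] ~ /^[0-9]+\/open\//) {
          split(f[i], p, "/")
          if (p[1] != "22" && p[1] != "80" && p[1] != "443")
            print "unexpected open port: " p[1]
        }
      }
    }'
}
# Usage: nmap -Pn -p- -oG - YOUR_SERVER_IP | check_scan
```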

Step 8 — Add hardening rules that matter in real operations

For many VPS roles, the basics already do the job. A few small tweaks can still reduce noise and catch common failure modes.

Rate-limit SSH to slow brute-force noise

This won’t replace Fail2Ban, but it cuts down log spam and makes credential stuffing less efficient.

sudo ufw limit 22/tcp comment 'Rate-limit SSH'

Important: ufw limit is itself an allow rule with rate limiting attached. If SSH is already restricted to specific IPs, a global limit rule quietly reopens port 22 to the whole internet, so skip it in that case. Keep the ruleset understandable; “secure” and “mysterious” are not the same thing.

Allow ICMP selectively (optional)

Some teams block ping. Others keep it because diagnostics get harder without it. If your uptime checks depend on ICMP, allow it. UFW handles this via /etc/ufw/before.rules. If you don’t have a strong reason to change it, leave the defaults alone.

Log what you drop (without drowning in logs)

On small VPS instances, low logging is usually the sweet spot.

sudo ufw logging low

Then inspect. UFW’s drop messages are kernel log entries tagged [UFW BLOCK], so read the kernel ring buffer (or /var/log/ufw.log) rather than a unit’s journal:

sudo journalctl -k --since '10 min ago' | grep 'UFW' | tail -n 50

If you want alerts that actually help during an incident, pair firewall logs with audit trails. The HostMyCode post on auditd monitoring and alerting complements this setup well.

Step 9 — Protect common “oops” exposures (databases, admin ports, metrics)

Most VPS problems aren’t exotic. They’re self-inflicted: Redis on 6379, Postgres on 5432, Prometheus exporters on 9100—all accidentally reachable from the internet.

Two approaches work well:

  • Bind privately: keep services on 127.0.0.1 or a private interface.
  • Firewall deny: explicitly deny the port from the internet (or allow only from a private subnet/VPN).

If you must expose Postgres to a single app server IP (203.0.113.50):

sudo ufw allow from 203.0.113.50 to any port 5432 proto tcp comment 'Postgres from app node'

Everything else stays blocked by default deny.

If you’re hosting databases and want a more managed posture, consider HostMyCode database hosting so you can separate concerns: app on one node, data on another, with clear firewall policy between them.

Common pitfalls (and how to avoid them)

  • Enabling UFW before allowing SSH: you’ll lock yourself out unless you have console access. Always add SSH allow rules first.
  • Forgetting IPv6: if your VPS has IPv6 enabled, confirm UFW is managing it. Check /etc/default/ufw for IPV6=yes. If you ignore IPv6, services can stay reachable over v6 while v4 looks “locked down.”
  • Opening a dev port permanently: a temporary 8080 has a way of surviving into production. Put a reverse proxy in front instead.
  • Overusing “allow from anywhere”: IP-scoped rules reduce noise and risk. Keep a documented break-glass path instead of leaving broad rules behind.
  • Messy rule order: UFW processes rules in order. Use ufw status numbered and keep the list short and intentional.
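
The IPv6 pitfall in particular is easy to automate away. A small sketch that reads the IPV6= setting; the helper name is ours, and the default path is the stock Ubuntu location (pass a different file to test on a copy).

```shell
# Return success only if UFW is configured to manage IPv6 rules.
# Stock Ubuntu config path assumed; an alternate file can be passed.
ufw_ipv6_enabled() {
  grep -q '^IPV6=yes' "${1:-/etc/default/ufw}"
}
# Usage: ufw_ipv6_enabled && echo "UFW manages IPv6" || echo "IPv6 NOT managed"
```

If the check fails, set IPV6=yes in /etc/default/ufw and reload before trusting any v4-only scan result.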

Rollback plan (do this before you need it)

If you lose access, recovery depends on what you prepared ahead of time:

  1. Provider console: log in via the VPS console and disable UFW.
    sudo ufw disable
  2. Revert to a known-good ruleset: if you exported rules, restore them (see below).
  3. Snapshot rollback: if you took a snapshot before hardening, roll back at the provider layer. It’s blunt, but it works.

Export your UFW rules now so rollback isn’t guesswork. Note the tee: a plain > redirect runs as your unprivileged user and can’t write into /root.

sudo ufw status verbose | sudo tee /root/ufw-status-$(date +%F).txt >/dev/null
sudo cp -a /etc/ufw /root/ufw-etc-backup-$(date +%F)

To restore the config directory (after disabling UFW), you can copy back and re-enable:

sudo ufw disable
sudo rsync -a /root/ufw-etc-backup-YYYY-MM-DD/ /etc/ufw/
sudo ufw enable

If you want a fuller disaster recovery runbook that includes restore testing and fast rollback, keep this HostMyCode guide handy: VPS disaster recovery planning in 2026.

Verification checklist (print this before you change prod)

  • You have console access (tested) or an out-of-band path
  • Two SSH sessions open before enabling UFW
  • ufw default deny incoming set
  • SSH allowed from your admin IP(s)
  • Only 80/tcp and 443/tcp open publicly (for a web VPS)
  • External scan confirms only intended ports are reachable
  • You saved a rules/config backup for rollback
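
The first few items are judgment calls, but the ruleset portion of this checklist can be scripted. A sketch that inspects a saved `ufw status verbose` dump; the `verify_ufw_dump` name and the expected strings are assumptions based on this guide’s setup, so adjust them for your own rules.

```shell
# Check a saved "ufw status verbose" dump against the mechanical
# checklist items. Expected strings mirror this guide's setup
# (deny-incoming default, at least one SSH rule); adjust as needed.
verify_ufw_dump() {
  local dump ok=0
  dump=$(cat)
  echo "$dump" | grep -q 'Status: active'  || { echo "FAIL: UFW not active"; ok=1; }
  echo "$dump" | grep -q 'deny (incoming)' || { echo "FAIL: default is not deny incoming"; ok=1; }
  echo "$dump" | grep -q '22/tcp'          || { echo "FAIL: no SSH rule found"; ok=1; }
  [ "$ok" -eq 0 ] && echo "checklist OK"
  return "$ok"
}
# Usage: sudo ufw status verbose | verify_ufw_dump
```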

Next steps after UFW (keep improving without overcomplicating)

  1. Move SSH to key-only + modern settings: disable password auth, set PermitRootLogin no, and consider moving SSH to a non-default port only if it fits your operational model.
  2. Add Fail2Ban or sshguard: UFW rate limiting helps, but application-aware bans are still useful.
  3. Set up a VPN for admin access: WireGuard lets you close SSH to the internet entirely in many environments.
  4. Baseline monitoring: watch for unexpected open ports and rule drift. A lightweight monitoring tool helps keep you honest.

If you’re planning bigger changes (new reverse proxy, swapping stacks, adding a DB node), do them on a staging VPS first. It’s cheaper than learning this lesson in production.

Summary

UFW firewall setup for a VPS is safest when you run it like a controlled rollout: inventory listeners, add IP-scoped SSH rules, allow only required services, verify externally, and keep a rollback plan that doesn’t depend on memory. That’s how you harden the box without triggering a self-inflicted outage.

For production workloads where you want stable networking and predictable recovery options, a HostMyCode VPS gives you a solid foundation. If you’d rather have OS and security maintenance handled as part of the service, managed VPS hosting is the practical upgrade.

If you’re tightening inbound access and standardizing firewall rules across servers, pick a VPS platform that makes console access and recovery straightforward. Start with a HostMyCode VPS, or hand off the operational load to managed VPS hosting so hardening work doesn’t turn into a late-night fire drill.

FAQ

Should I allow both 80 and 443, or only 443?

If you control the client behavior (internal services), you can run HTTPS-only. For public websites and APIs, keeping 80/tcp open for redirects reduces support issues and avoids weird client failures.

Can I use UFW on a Docker host?

Yes, but be careful. Docker manipulates iptables/nftables rules and can bypass naive firewall expectations. If the VPS is Docker-heavy, plan firewall policy with Docker networking in mind and verify reachability with external scans after every deployment change.

Will UFW break outbound package installs or Let’s Encrypt?

Not with the defaults in this guide (default allow outgoing). Certbot and apt need outbound HTTPS/DNS. If you later restrict outbound, explicitly allow DNS (53/udp and 53/tcp) and HTTPS (443/tcp) to the required destinations.

What’s the safest way to avoid getting locked out over SSH?

Add SSH allow rules first, keep a second session open, and confirm you have console access. If your risk tolerance is low, keep a time-limited break-glass rule that cleans itself up via systemd.

Do I still need a firewall if I already have cloud security groups?

Security groups are a strong first layer, but host-level rules catch mistakes and protect you if the perimeter config drifts. Defense-in-depth matters most on long-lived VPS instances where teams and configs change over time.