
Linux VPS hardening checklist in 2026: SSH, firewall, updates, and audit-ready defaults


By Anurag Singh
Updated on Apr 16, 2026

You can lock down a server for months and still lose it in one afternoon: one sloppy SSH setting, a firewall rule you forgot you added, or a security update that installed but never took effect because nothing restarted. This Linux VPS hardening checklist is written for how VPSs look in 2026—systemd almost everywhere, nftables under the hood, and “show me your controls” expectations even for small services.

This isn’t a security manifesto. It’s a baseline you can apply to common VPS setups: a Node/Go API, a small SaaS admin panel, a Postgres-backed internal tool, or a personal blog that still processes payments. Each section stays concrete, includes commands you can run to confirm the change, and calls out rollback options so you can move one step at a time without cutting off remote access.

Scope, assumptions, and what “hardened” means here

This checklist assumes a single internet-exposed Linux VPS running a few services, usually behind a reverse proxy. The goal is to close the most common compromise paths: weak remote access, accidental network exposure, missing updates (or missing restarts), and poor visibility when something goes wrong.

  • Audience: intermediate developers and sysadmins who can SSH into a server and edit config files safely.
  • Tested patterns: Debian 12 / Ubuntu 24.04 LTS / AlmaLinux 9–10 class systems, systemd, OpenSSH, nftables or ufw front-ends.
  • Not covered: Kubernetes, complex multi-tenant environments, and deep app-layer security (authz, SSRF, etc.).

Prerequisites (do these before you touch SSH)

Most “hardening failures” are self-inflicted: you apply a change, the connection drops, and now you’re locked out. Set up your escape hatch first, then start editing.

  • Console access via your VPS provider (web console / VNC / rescue mode) or an out-of-band method.
  • A second SSH session open (keep it logged in) while you test changes in the first session.
  • Root access (or sudo) and a basic editor (nano/vim).
  • Document your current state: sshd_config, firewall rules, listening ports.

If you’re building a fresh baseline, it’s often faster to start from a clean instance than to retrofit a long-lived machine. A small HostMyCode VPS is plenty for a hardened reverse proxy + app stack, and you can scale later without revisiting the fundamentals.

Linux VPS hardening checklist: 12 items you can validate

These are “must be true” statements you can check on demand. You don’t need every advanced control on day one, but you do want a baseline you can enforce and re-verify after every change.

  1. SSH uses keys, not passwords, and root login is disabled.
  2. The SSH port is changed only if you can manage a nonstandard port reliably (optional).
  3. The firewall defaults to deny inbound, allow outbound, and explicitly permits only required ports.
  4. Only required services listen on public interfaces.
  5. Automatic security updates exist and you have a reboot/restart plan.
  6. Time sync works (TLS and logs depend on it).
  7. Logs are persisted long enough for incident work, with sane retention.
  8. Basic brute-force noise is rate-limited (at SSH and/or firewall level).
  9. File permissions and sudo access follow least privilege.
  10. Backups exist and are test-restored (a security control, not just ops hygiene).
  11. Integrity/audit tooling runs periodically (even lightweight).
  12. You can prove the above quickly with a short verification script.

1) Inventory: what’s listening, what’s installed, what users exist

Before you change anything, take a quick snapshot of the current exposure. You’ll rerun these commands after hardening to confirm what actually changed.

sudo ss -tulpen
sudo systemctl --type=service --state=running
getent passwd | awk -F: '$3 >= 1000 {print $1 ":" $7}'
sudo last -a | head

What you want to see: only the ports you expect (often 22, plus 80/443 for web) and a short list of services.

Red flag: databases (Postgres/MySQL/Redis) listening on 0.0.0.0 or :: unless you deliberately expose them behind real access controls.
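It can also help to snapshot this output so you can diff it after each hardening step. A minimal sketch (the file paths under /root are just a suggestion):

```shell
# Snapshot current listeners and running services for later comparison
sudo ss -tulpen | sort > /root/baseline-sockets.txt
sudo systemctl --type=service --state=running --no-legend | sort > /root/baseline-services.txt

# After hardening, diff against the snapshot; new or missing lines stand out
sudo ss -tulpen | sort | diff -u /root/baseline-sockets.txt - || true
```

The `|| true` keeps a non-empty diff from aborting scripts that run with `set -e`.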

2) Create a non-root admin user with tight sudo

If you still log in as root, fix that first. Create a dedicated admin account and require sudo for privileged work.

sudo adduser deploy
sudo usermod -aG sudo deploy
sudo visudo

In /etc/sudoers (or better: /etc/sudoers.d/deploy), keep it readable and boring:

# /etc/sudoers.d/deploy
Defaults:deploy !requiretty
deploy ALL=(ALL) ALL

Verification:

su - deploy
sudo -l

Pitfall: avoid passwordless sudo unless you have a clear reason and compensating controls (MFA on SSH, short-lived certs, strong session auditing). Convenience cuts both ways.
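If part of the workflow genuinely needs passwordless sudo (a deploy pipeline, say), scope it to specific commands instead of ALL. A sketch with a hypothetical service name and command path:

```text
# /etc/sudoers.d/deploy-scoped -- illustrative only; validate with
# `visudo -cf /etc/sudoers.d/deploy-scoped` before relying on it
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart inventory-api
deploy ALL=(ALL) ALL
```

Everything outside the allow-listed command still prompts for a password, which limits what a stolen SSH key can do unattended.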

3) SSH: keys only, no root, and predictable settings

SSH is still the front door for most VPS compromises. The baseline is straightforward: key auth only, no password prompts, no root login, and an explicit allow-list for users.

On your workstation, generate a modern key if you don’t already have one. Ed25519 is the default choice for most environments.

ssh-keygen -t ed25519 -a 64 -f ~/.ssh/hmc-deploy-2026

Copy the public key to the server (you can use a temporary password login if needed, but plan to remove it):

ssh-copy-id -i ~/.ssh/hmc-deploy-2026.pub deploy@your.server.ip

Edit /etc/ssh/sshd_config. Make your intent explicit—don’t lean on commented defaults.

# /etc/ssh/sshd_config (snippet)
Port 22
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AuthenticationMethods publickey
AllowUsers deploy
X11Forwarding no
PermitTunnel no
AllowTcpForwarding no
ClientAliveInterval 300
ClientAliveCountMax 2

Apply safely: validate the config before you reload.

sudo sshd -t
sudo systemctl reload ssh   # the unit is named sshd on RHEL-family systems

Verification: open a new terminal and log in with the key.

ssh -i ~/.ssh/hmc-deploy-2026 deploy@your.server.ip

Expected outcome: key login works; password auth does not. Root login should fail.
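On the workstation side, a ~/.ssh/config entry keeps the user and key path consistent (the host alias here is made up):

```text
# ~/.ssh/config (client side)
Host hmc-prod
    HostName your.server.ip
    User deploy
    IdentityFile ~/.ssh/hmc-deploy-2026
    IdentitiesOnly yes
```

After this, `ssh hmc-prod` connects with exactly this key; IdentitiesOnly stops the agent from offering unrelated keys first.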

Rollback: if you lose access, use your provider console/rescue mode to revert /etc/ssh/sshd_config, run sshd -t, then restart SSH.

If you want a stricter SSH access model (ProxyJump, better audit trails, optional MFA), pair this checklist with SSH bastion host setup with ProxyJump, MFA, and audit logs.

4) Firewall: default deny inbound, then add only what you need

Pick a firewall you can understand at a glance. nftables is the standard underneath modern distros, but ufw/firewalld are fine if your team already uses them. The best firewall is the one you’ll maintain consistently.

Option A: ufw (Ubuntu/Debian style)

sudo apt-get update
sudo apt-get install -y ufw

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp comment 'SSH'
sudo ufw allow 80/tcp comment 'HTTP'
sudo ufw allow 443/tcp comment 'HTTPS'
sudo ufw enable
sudo ufw status verbose

Expected output: Status active, with only the ports you allowed.

Option B: nftables (direct, audit-friendly)

sudo apt-get install -y nftables
sudo systemctl enable --now nftables
sudo nano /etc/nftables.conf
# /etc/nftables.conf (minimal, IPv4+IPv6)
flush ruleset

table inet filter {
  chain input {
    type filter hook input priority 0;
    policy drop;

    ct state established,related accept
    iif lo accept

    # ICMP helps with MTU and basic network health
    ip protocol icmp accept
    ip6 nexthdr icmpv6 accept

    # SSH
    tcp dport 22 ct state new accept

    # Web
    tcp dport {80, 443} ct state new accept

    # Drop the rest
    counter drop
  }

  chain forward {
    type filter hook forward priority 0;
    policy drop;
  }

  chain output {
    type filter hook output priority 0;
    policy accept;
  }
}
sudo nft -c -f /etc/nftables.conf
sudo systemctl restart nftables
sudo nft list ruleset

Verification from your laptop:

nmap -Pn -p 1-1024 your.server.ip

What you want to see: only 22/80/443 open (or fewer, if you don’t serve HTTP).

Pitfall: forgetting IPv6. If your VPS has AAAA DNS and you only firewall IPv4, you still have an exposed server.
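A quick way to check whether IPv6 is actually in play on the host (the awk filter just trims iproute2's one-line output; interface names will differ):

```shell
# List global IPv6 addresses, if any
ip -6 -o addr show scope global | awk '{print $2, $4}'

# If addresses show up, scan the host over IPv6 from your workstation too
nmap -6 -Pn -p 1-1024 your.server.hostname
```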

For a versioned, rollback-friendly approach with logging and rate limits, see VPS firewall logging with nftables.

5) Reduce attack surface: bind internal services to localhost

This is one of the highest-ROI steps in the whole list. Many incidents start with a stray debug listener or a database bound to the world.

Check what’s listening publicly:

sudo ss -tulpen | awk '{print $1, $5, $7}'

If you run Postgres, bind it to localhost unless you intentionally expose it:

# /etc/postgresql/16/main/postgresql.conf
listen_addresses = '127.0.0.1,::1'
sudo systemctl restart postgresql
sudo ss -tulpen | grep 5432

Expected: 127.0.0.1:5432 and/or [::1]:5432, not 0.0.0.0:5432.
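The same pattern applies to other datastores. For Redis, for example (both directives are standard redis.conf options):

```conf
# /etc/redis/redis.conf (snippet)
bind 127.0.0.1 ::1
protected-mode yes
```

Restart with `sudo systemctl restart redis-server` and re-check with `ss -tulpen | grep 6379`.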

6) Patch posture: unattended security updates plus controlled reboots

“Install updates” is a suggestion, not a process. Your baseline needs three things: automatic security updates, a way to verify what happened, and a clear signal when a reboot is required.

Debian/Ubuntu approach:

sudo apt-get update
sudo apt-get install -y unattended-upgrades needrestart
sudo dpkg-reconfigure -plow unattended-upgrades

Then confirm the timer is active:

systemctl list-timers | grep -E 'unattended|apt'

Expected: a daily or periodic unattended upgrades timer.

Reboot visibility:

if [ -f /var/run/reboot-required ]; then cat /var/run/reboot-required; fi
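If a nightly maintenance window is acceptable, unattended-upgrades can also handle the reboot itself; both keys below are standard options in 50unattended-upgrades (the time is an example):

```conf
// /etc/apt/apt.conf.d/50unattended-upgrades (snippet)
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```

Leave Automatic-Reboot at "false" if you prefer to reboot manually on a schedule you control.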

If you want a safer lifecycle (staging, snapshots, predictable rollback), the runbook is in VPS patch management in 2026.

7) Time sync and DNS hygiene (small settings, big impact)

Bad time breaks TLS, scrambles incident timelines, and occasionally trips package signature checks. For most VPS setups, systemd-timesyncd is enough.

timedatectl status
sudo systemctl enable --now systemd-timesyncd

Expected: System clock synchronized: yes
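If you need to pin specific NTP servers (compliance, an internal pool), timesyncd reads them from its own config; the servers below are placeholders:

```ini
# /etc/systemd/timesyncd.conf (snippet)
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=time.cloudflare.com
```

Apply with `sudo systemctl restart systemd-timesyncd` and re-check `timedatectl status`.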

For DNS, avoid “mystery resolvers.” Confirm what the host actually uses:

resolvectl status

If you run production services, keep DNS records consistent and owned. TLS automation and email deliverability both depend on it. HostMyCode’s domain and DNS management is at HostMyCode domains.

8) Logging you can use: persistence, rotation, and shipping

Logs are your timeline. They’re also a common cause of outages when nobody caps disk usage. Treat logging as part visibility, part capacity management.

Journald persistence: make logs survive reboots if you need auditability.

sudo nano /etc/systemd/journald.conf
# /etc/systemd/journald.conf (snippet)
Storage=persistent
SystemMaxUse=1G
RuntimeMaxUse=256M
MaxRetentionSec=30day
sudo systemctl restart systemd-journald
journalctl --disk-usage

Expected: disk usage stays bounded, and logs persist.

If you want central visibility without running a large stack, Loki is a solid option. The setup guide is here: VPS log shipping with Loki. For local retention planning, see VPS log rotation best practices.

9) Basic abuse resistance: rate-limit SSH and noisy ports

You won’t stop scanning. You can stop it from becoming expensive—CPU spikes, giant logs, and endless auth noise.

Firewall-level rate limiting with nftables (example)

# add inside chain input, before accepting SSH
tcp dport 22 ct state new limit rate 15/minute burst 30 packets accept

Alternatively, if you already run Fail2Ban and you’re willing to maintain it, it still works well for SSH and common web auth endpoints. Keep jails explicit, and avoid relying on unclear defaults.
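If you do go the Fail2Ban route, keep the jail explicit rather than leaning on distro defaults. A minimal sketch (the thresholds are illustrative, not recommendations):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Verify the jail is live with `sudo fail2ban-client status sshd`.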

10) Service hardening with systemd sandboxing (low effort, real payoff)

Most VPS workloads live in systemd units. You can tighten the boundary without containers by adding a handful of unit options. It won’t fix app bugs, but it does reduce blast radius.

Example: a small internal API running on 127.0.0.1:9001 behind Nginx.

sudo nano /etc/systemd/system/inventory-api.service
[Unit]
Description=Inventory API
After=network-online.target
Wants=network-online.target

[Service]
User=inventory
Group=inventory
WorkingDirectory=/srv/inventory-api
ExecStart=/srv/inventory-api/bin/inventory-api --listen 127.0.0.1:9001
Restart=on-failure
RestartSec=2

# Sandboxing
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
LockPersonality=true
MemoryDenyWriteExecute=true
RestrictSUIDSGID=true
RestrictNamespaces=true
RestrictRealtime=true
SystemCallArchitectures=native

# Allow writes only where needed
ReadWritePaths=/srv/inventory-api /var/lib/inventory-api

[Install]
WantedBy=multi-user.target
sudo useradd --system --home /var/lib/inventory-api --shell /usr/sbin/nologin inventory
sudo mkdir -p /srv/inventory-api /var/lib/inventory-api
sudo chown -R inventory:inventory /srv/inventory-api /var/lib/inventory-api

sudo systemctl daemon-reload
sudo systemctl enable --now inventory-api
sudo systemctl status inventory-api --no-pager

Expected: the service runs, and it fails loudly if the app tries to write outside the allowed paths. That friction is a feature.

If you want the “self-healing but safe” pattern, add watchdog health checks and rollback habits from systemd watchdog on a VPS.

11) Backups as a security control (ransomware and operator error)

If an attacker gets in, deleting backups or encrypting reachable data is often the next step. Your baseline should include off-host backups and routine restore tests, not just “we have a backup job.”

A practical pattern for VPS is restic + S3-compatible storage, plus verification and periodic test restores. The full guide is here: Linux VPS backup strategy with restic + S3.

If you’d rather keep the platform side simple, a managed VPS plan can take the edge off backup scheduling, monitoring, and patch cadence. HostMyCode’s managed VPS hosting fits when you want the baseline enforced without babysitting it every week.

12) Quick verification: a small “prove it” script you can rerun

Hardening only sticks if you can re-check it quickly. Drop a tiny verification script on the server and run it after changes, after reboots, and after provisioning new hosts.

sudo nano /usr/local/sbin/hardening-check.sh
sudo chmod +x /usr/local/sbin/hardening-check.sh
#!/usr/bin/env bash
set -euo pipefail

echo "== SSH settings =="
sshd -T | grep -E '^(port|permitrootlogin|passwordauthentication|kbdinteractiveauthentication|pubkeyauthentication|allowusers) '

echo
echo "== Listening sockets (public) =="
ss -tulpen | awk 'NR==1 || $5 ~ /0\.0\.0\.0:|\[::\]:/ {print}'

echo
echo "== Firewall status =="
if command -v ufw >/dev/null 2>&1; then
  ufw status verbose
elif command -v nft >/dev/null 2>&1; then
  nft list ruleset | sed -n '1,160p'
else
  echo "No ufw/nft found"
fi

echo
echo "== Updates/reboot required =="
if [ -f /var/run/reboot-required ]; then
  echo "REBOOT REQUIRED: $(cat /var/run/reboot-required)"
else
  echo "No reboot flag present"
fi

echo
echo "== Time sync =="
timedatectl status | grep -E 'System clock synchronized|NTP service'

Run it:

sudo /usr/local/sbin/hardening-check.sh

What good looks like: password auth off, root login off, only expected public listeners, firewall active, clock synced.
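To keep the check from becoming a one-off, you can run it on a schedule with a small systemd timer (unit names are, of course, your choice):

```ini
# /etc/systemd/system/hardening-check.service
[Unit]
Description=Baseline hardening verification

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/hardening-check.sh

# /etc/systemd/system/hardening-check.timer
[Unit]
Description=Run the hardening check weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `sudo systemctl enable --now hardening-check.timer` and review results with `journalctl -u hardening-check.service`.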

Common pitfalls (and how to avoid them)

  • Locking yourself out with firewall rules: always allow SSH before enabling a default-deny policy. Keep an active second session while you test.
  • Forgetting IPv6: either fully firewall IPv6 or disable it intentionally. Half-configured IPv6 is a common accidental exposure.
  • Turning on unattended upgrades and never rebooting: a patched kernel you haven’t booted into is still an old kernel. Track reboots deliberately.
  • Breaking services with systemd sandboxing: start with NoNewPrivileges and PrivateTmp, then add stricter options while watching logs.
  • Logging everything until the disk fills: set journald limits, validate logrotate, and check disk usage weekly.

Rollback plan (keep it boring, keep it written down)

You don’t need an elaborate rollback plan for baseline hardening. You do need something you can follow half-asleep.

  1. SSH rollback: provider console → revert /etc/ssh/sshd_config → sshd -t → systemctl restart ssh.
  2. Firewall rollback: if nftables blocks you, console in and temporarily flush: nft flush ruleset (then fix and reapply). For ufw: ufw disable.
  3. Service rollback: keep a copy of the prior systemd unit: cp inventory-api.service inventory-api.service.bak. If sandboxing breaks the app, revert and reload daemon.
  4. Package rollback: snapshot before major changes if your provider supports it. If you can’t snapshot, keep changes small and staged.

If you want a clean operational model for changes (including traffic cutover and safe rollbacks), the patterns in Linux VPS blue-green deployment with systemd + Nginx transfer nicely to security changes as well.

Next steps (after the baseline is stable)

  • Add monitoring with alerts that aren’t noisy: start with CPU, memory, disk, and a blackbox check for your public endpoints. Use Prometheus + Grafana monitoring if you want control, or a lighter hosted approach if you don’t.
  • Ship logs off-host: local logs are easy to wipe. Central log shipping improves incident response.
  • Run periodic audits: use a tool like Lynis for baseline drift and track results. The practical workflow is in Linux VPS security auditing in 2026.
  • Separate environments: don’t host staging/admin tools on the same machine as production customer data unless you can enforce strong isolation.

Summary: a repeatable baseline beats a “perfect” one

The security posture that matters is the one you can keep. Apply the checklist, verify it with the script, and re-check after every major change or OS upgrade. If you provision hosts often, codify the baseline in automation and treat servers as replaceable.

For teams that want this baseline without spending time on OS upkeep, start with HostMyCode VPS for full control, or choose managed VPS hosting when you’d rather stay focused on the application. Either way, the checklist stays the same—and that consistency is the point.

If you’re standardizing new servers, start with a HostMyCode VPS so you control SSH, firewall rules, and update cadence from day one. If you want the hardening fundamentals maintained with less hands-on work, managed VPS hosting is a practical middle ground.

FAQ

Should I change the SSH port in 2026?

It reduces bot noise but doesn’t replace real controls. If changing the port complicates automation or onboarding, keep 22 and focus on keys-only auth, no root login, and rate limiting.

Is ufw “good enough” compared to nftables?

Yes, if you keep it simple and you verify the resulting rules. nftables gives you more precise logging and rate-limiting patterns, but ufw is easier to maintain on small teams.

What’s the minimum set of open ports for a typical web app VPS?

Usually TCP 22 (SSH), 80 (HTTP), and 443 (HTTPS). Many setups can even drop 80 once HTTPS is enforced, but keep it if you rely on ACME HTTP-01 challenges.

How do I confirm I didn’t accidentally expose my database?

Run ss -tulpen and look for 0.0.0.0:5432 (Postgres) or similar. You want databases bound to 127.0.0.1 or a private interface, not all interfaces.

What’s the fastest way to detect baseline drift over time?

Rerun a verification script (like the one above) and schedule periodic audits (Lynis) plus monitoring. Drift usually appears as new listening ports, disabled updates, or relaxed SSH settings.