
Your Nginx server can be “up” and still be easy to abuse. The common issues aren’t exotic. They include permissive TLS, missing security headers, noisy endpoints, and missing request controls.
This Nginx security hardening tutorial gives you a baseline for Ubuntu 24.04/26.04, Debian 12, and AlmaLinux/Rocky 9—without turning your site into a wall of 403s.
Work in small, reversible steps. Change one thing, reload Nginx, verify, then continue.
This pace matters on WordPress, WooCommerce, and multi-site VPS setups. A “security fix” can quietly break logins, checkout, or third-party integrations.
If you want a clean environment for these steps, start with a HostMyCode VPS. Or choose managed VPS hosting if you prefer help with OS updates and service hardening.
What you’ll harden (and what you won’t)
This tutorial sticks to controls that improve security quickly on typical VPS hosting stacks:
- Reduce information leakage (version banners, directory quirks).
- Make HTTPS non-negotiable with modern TLS defaults.
- Block obvious abuse with rate limiting and request sizing.
- Fix common dangerous Nginx patterns (bad try_files, accidental PHP exposure).
- Add basic WAF coverage using ModSecurity 3 where it makes sense.
It does not replace application security work. If WordPress has a vulnerable plugin, headers won’t save you.
Treat these changes as the server-side seatbelt.
Prerequisites and a safe rollback plan
Before you edit anything, capture a known-good snapshot of your current config. You want a fast undo button.
- Root or sudo access
- Nginx installed (package or vendor repo)
- A domain pointed at the server
- SSL certificate (Let’s Encrypt or commercial)
Quick backup of Nginx config
sudo mkdir -p /root/backup-nginx
sudo cp -a /etc/nginx /root/backup-nginx/etc-nginx-$(date +%F)
Know your Nginx layout
Common paths:
- Ubuntu/Debian: /etc/nginx/nginx.conf, sites in /etc/nginx/sites-available/ + sites-enabled/
- AlmaLinux/Rocky (EPEL or nginx repo): /etc/nginx/nginx.conf, vhosts often in /etc/nginx/conf.d/*.conf
Always test config before reload
sudo nginx -t
If it passes, reload without dropping connections:
sudo systemctl reload nginx
If you want a repeatable ops routine, pair hardening with monthly hygiene.
The VPS maintenance checklist fits well alongside this work.
Nginx security hardening tutorial step 1: remove version leaks and risky defaults
Start with low-risk wins. They won’t stop a targeted attacker. They do reduce automated fingerprinting and accidental exposure.
Hide Nginx version banner
Edit /etc/nginx/nginx.conf inside the http { } block:
server_tokens off;
Turn off unneeded methods (optional)
If you don’t use WebDAV or uncommon verbs, restrict requests to the basics.
Add inside your server { } block:
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
return 405;
}
Pitfall: Some APIs legitimately require PUT/PATCH/DELETE. Use this only on static sites, or scope it to specific location blocks.
Disable directory listings
It’s usually off by default. Making it explicit avoids surprises:
autoindex off;
Verify
sudo nginx -t && sudo systemctl reload nginx
curl -I https://yourdomain.com | sed -n '1,10p'
You should not see a Server: nginx/1.x banner with a version number.
Lock down TLS: modern protocols, ciphers, OCSP stapling
TLS mistakes still cause real problems. They can expose you to downgrade attacks, trigger browser warnings, and leave weak crypto in place.
They also turn compliance into a time sink.
Create a reusable TLS include file
Create /etc/nginx/snippets/tls-modern.conf:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
# Sensible TLS 1.3 defaults; TLS 1.2 suites depend on OpenSSL build.
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
# OCSP stapling (requires resolver and full chain)
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
Note: On Ubuntu 24.04/26.04, Debian 12, and RHEL 9-based systems, OpenSSL is modern enough for TLS 1.3 in standard builds.
Use strong DH params only if you actually use DHE (most don’t)
Most modern configs negotiate ECDHE. They never touch classic DHE.
If you have older requirements, generate DH params once:
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
Then add to the snippet only if needed:
# ssl_dhparam /etc/ssl/certs/dhparam.pem;
Apply the snippet to your HTTPS server block
Example vhost:
server {
listen 443 ssl;
http2 on;
server_name yourdomain.com www.yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
include /etc/nginx/snippets/tls-modern.conf;
# ... rest of your config
}
Verify with OpenSSL
openssl s_client -connect yourdomain.com:443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
openssl s_client -connect yourdomain.com:443 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
You should see TLS 1.2 and TLS 1.3 negotiate cleanly.
If TLS 1.2 fails, your config or OpenSSL build is too strict.
Security headers that don’t cause self-inflicted outages
Headers are easy to add. They’re also easy to get wrong.
Start with conservative defaults. Tighten them only after you’ve verified site behavior.
Create a headers snippet
Create /etc/nginx/snippets/security-headers-basic.conf:
# Avoid MIME sniffing
add_header X-Content-Type-Options "nosniff" always;
# Clickjacking protection
add_header X-Frame-Options "SAMEORIGIN" always;
# Referrer policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Permissions policy (keep conservative)
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# HSTS (enable only when HTTPS is correct everywhere)
# add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
Apply per HTTPS vhost
Inside server { ... } (the 443 one), add:
include /etc/nginx/snippets/security-headers-basic.conf;
Pitfall: Don’t enable HSTS on day one if you still serve anything over HTTP, have legacy subdomains, or you’re mid-migration.
HSTS can pin browsers to a broken HTTPS setup for months.
Optional: add a CSP later
Content-Security-Policy can eliminate entire classes of injection issues.
It can also break payment flows, analytics, and embedded widgets.
When you’re ready, start with report-only mode. Tighten it in small steps.
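A report-only starting point can live in the same headers snippet. The policy below is a conservative illustration, not a drop-in value; adjust the sources to match your actual scripts and assets:

```nginx
# Report-only: browsers log violations to the console (and to report-to
# endpoints if configured) but do not block anything.
add_header Content-Security-Policy-Report-Only "default-src 'self'; img-src 'self' data:; script-src 'self'" always;
```

Watch the browser console on key pages (login, checkout) for a while before converting this into an enforcing Content-Security-Policy header.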
Verify headers
curl -I https://yourdomain.com | grep -i -E 'x-content-type-options|x-frame-options|referrer-policy|permissions-policy|strict-transport-security'
Request controls: rate limiting, timeouts, and sane body sizes
On a typical VPS, most “attacks” look like brute-force logins, scanner traffic, and cheap bot floods. Nginx can absorb a lot of it.
It only helps if you set boundaries.
Define rate limit zones (global)
In /etc/nginx/nginx.conf inside http { }:
# Per-IP request rate
limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
# Concurrent connections per IP
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
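One extra knob worth setting in the same http { } block: by default nginx answers rate-limited requests with 503, which blends into real server errors in your logs. A small sketch:

```nginx
# Use 429 for rejected requests so rate-limit hits stand out
# from genuine 5xx errors in access logs and monitoring.
limit_req_status 429;
limit_conn_status 429;
```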
Apply limits to sensitive locations
For WordPress, start with login and XML-RPC.
In your vhost:
location = /wp-login.php {
limit_req zone=req_per_ip burst=20 nodelay;
limit_conn conn_per_ip 10;
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
location = /xmlrpc.php {
# return acts before limit_req would run, so pairing them is pointless;
# this block simply refuses XML-RPC outright
return 403;
}
Pitfall: If you use Jetpack or external publishing tools, blocking /xmlrpc.php can break them.
In that case, keep it reachable and rate-limit it hard instead.
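If you do need XML-RPC, a stricter variant of the login pattern works. This sketch reuses the zones defined earlier and assumes the same Ubuntu PHP-FPM socket path as above:

```nginx
# Keep xmlrpc.php reachable for Jetpack/remote publishing,
# but with a much tighter burst than normal traffic.
location = /xmlrpc.php {
    limit_req zone=req_per_ip burst=5 nodelay;
    limit_conn conn_per_ip 5;
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
```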
Set practical timeouts and body size
In the vhost (or globally if you understand the impact):
client_body_timeout 15s;
client_header_timeout 15s;
send_timeout 30s;
# Keep uploads realistic; raise only for sites that need it
client_max_body_size 32m;
Verify behavior quickly
Upload a file larger than your limit and confirm you get 413 Request Entity Too Large.
Only raise the limit where it’s truly needed (for example, a WooCommerce import endpoint).
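client_max_body_size can be scoped per location, so one large-upload endpoint doesn't raise the ceiling site-wide. A sketch, with a hypothetical import path and the Ubuntu socket path from earlier:

```nginx
# Exact-match location wins over the generic PHP regex handler,
# so only this endpoint gets the larger limit.
location = /wp-admin/import.php {
    client_max_body_size 256m;  # hypothetical large-upload endpoint
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
```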
Stop common PHP exposure mistakes (WordPress and classic PHP apps)
The expensive Nginx mistakes tend to be PHP-related. A common one is passing the wrong path info to PHP-FPM.
Another is letting arbitrary URLs get interpreted as executable scripts.
Use a safe WordPress-style front controller
Inside your server { }:
root /var/www/yourdomain.com/public;
index index.php index.html;
location / {
try_files $uri $uri/ /index.php?$args;
}
Lock PHP execution to real files only
Use this pattern for PHP-FPM:
location ~ \.php$ {
try_files $uri =404;
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
Why it matters: try_files $uri =404; prevents “/uploads/shell.php.jpg” style tricks from reaching PHP in sloppy configs.
Block access to hidden files and sensitive paths
location ~ /\.(?!well-known) {
deny all;
}
location ~* /(readme\.html|license\.txt) {
deny all;
}
location ~* /(wp-config\.php|composer\.(json|lock)) {
deny all;
}
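In the same spirit, you can refuse to execute PHP from upload directories entirely — a common WordPress hardening step, since nothing in uploads should legitimately be a script:

```nginx
# Regex locations are matched in order of appearance, so place this
# ABOVE the generic "location ~ \.php$" handler so it wins.
location ~* /wp-content/uploads/.*\.php$ {
    deny all;
}
```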
Verification checklist
- curl -I https://yourdomain.com/.env should return 403/404
- curl -I https://yourdomain.com/wp-config.php should return 403/404
- PHP pages still render, static assets still cache
Logging for security: keep what helps, rotate what hurts
Without logs, you end up guessing. With too much logging, bots can fill your disk faster than you’d expect.
Use a “security-focused” access log format (optional)
In http { }:
log_format security '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct=$upstream_connect_time '
'uht=$upstream_header_time urt=$upstream_response_time';
Then in the vhost:
access_log /var/log/nginx/yourdomain.security.log security;
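Once the log exists, a quick way to spot abusive clients is to count status codes per IP. The pipeline below runs against a small inline sample so it's self-contained; in practice, point awk at /var/log/nginx/yourdomain.security.log:

```shell
# Build a tiny sample log so this example is self-contained.
cat > /tmp/sample-access.log <<'EOF'
203.0.113.5 - - [01/Jan/2026:10:00:00 +0000] "POST /wp-login.php HTTP/1.1" 429 0 "-" "-"
203.0.113.5 - - [01/Jan/2026:10:00:01 +0000] "POST /wp-login.php HTTP/1.1" 429 0 "-" "-"
198.51.100.7 - - [01/Jan/2026:10:00:02 +0000] "GET / HTTP/1.1" 200 512 "-" "-"
EOF

# Count requests per client IP and status code; a high 4xx count
# from a single IP usually means a scanner or brute-force attempt.
awk '{print $1, $9}' /tmp/sample-access.log | sort | uniq -c | sort -rn
```

The top of the output surfaces the noisiest IP/status pairs first, which is usually enough to decide whether a limit or firewall rule is warranted.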
Make sure rotation is in place
Most distributions ship a logrotate rule for Nginx. Confirm it exists.
Then see what it will do:
sudo ls -l /etc/logrotate.d/nginx
sudo logrotate -d /etc/logrotate.d/nginx | sed -n '1,120p'
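On Debian/Ubuntu the stock rule matches /var/log/nginx/*.log, so the custom security log above is usually rotated automatically. If you ever log outside that directory, a dedicated rule might look like this (illustrative path and retention):

```
/var/log/yourdomain/security.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # signal nginx to reopen its log files after rotation
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```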
If you’ve dealt with “disk full” outages before, pair this with Linux VPS log rotation setup.
Track growth using the VPS disk space cleanup guide.
Add a basic WAF layer (ModSecurity 3) where it actually fits
A WAF isn’t required on every VPS. It can help on WordPress installs that attract constant exploit scans.
It also adds ongoing tuning work. Plan for that up front.
There are two practical approaches in 2026:
- Host-based WAF with ModSecurity 3 + CRS (more control, more tuning)
- Upstream WAF/CDN (less server load, fewer knobs; not covered here)
Ubuntu/Debian: install ModSecurity 3 and Nginx connector (package availability varies)
Some distributions ship ModSecurity 3 components. Others require compiling.
If you don’t want build maintenance, an upstream WAF or a managed hosting plan is usually a better fit.
If your repo provides packages, the flow typically looks like:
sudo apt update
sudo apt install -y libmodsecurity3 modsecurity-crs
Reality check: Nginx needs the ModSecurity connector module. Many admins use vendor builds or compile Nginx with the module.
If that’s not your lane, skip WAF and double down on rate limiting and patching.
Run CRS in detection-only first
Detection mode lowers the risk of false-positive outages.
Look for a config such as:
- /etc/modsecurity/modsecurity.conf (engine and audit log)
- CRS rules in something like /usr/share/modsecurity-crs/
Set:
SecRuleEngine DetectionOnly
Verify that requests still work
Make a normal request and confirm you get 200s. Then watch the audit log for a day.
Only switch to blocking after you understand what it flags on your site.
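Assuming the connector module is available, wiring it into a vhost typically looks like this. The paths here are common conventions, not guarantees — check where your packages put things:

```nginx
# Top of nginx.conf, if the connector was built as a dynamic module:
# load_module modules/ngx_http_modsecurity_module.so;

server {
    # ...
    modsecurity on;
    # main.conf conventionally Includes modsecurity.conf and the CRS rule files
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
}
```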
Harden the network edge: firewall rules that match Nginx reality
If your firewall allows random inbound ports, they will get hit. Keep inbound open only for what you actually use.
That usually means SSH, HTTP, HTTPS, and any control panel ports you’ve explicitly chosen to expose.
If you haven’t built a clean ruleset yet, follow Linux VPS firewall setup with nftables.
At minimum, your inbound should look like:
- 22/tcp (SSH) from your IPs if possible
- 80/tcp (HTTP) and 443/tcp (HTTPS)
- Nothing else unless required
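The list above translates into a minimal nftables inet ruleset along these lines (a sketch; tighten the SSH rule to your admin IPs where possible):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        meta l4proto { icmp, ipv6-icmp } accept
        tcp dport 22 accept          # better: restrict to your admin IPs
        tcp dport { 80, 443 } accept
    }
}
```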
Verification: quick tests that catch most breakage
A clean reload isn’t proof. Run a few checks to confirm the behavior you intended.
1) Config and service health
sudo nginx -t
sudo systemctl status nginx --no-pager
sudo journalctl -u nginx -n 50 --no-pager
2) Header and TLS checks
curl -I https://yourdomain.com
curl -I http://yourdomain.com
If you enforce HTTPS redirects, HTTP should return a 301/308 to HTTPS.
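The redirect itself is a small catch-all HTTP server block. This sketch also keeps the ACME path reachable for Let's Encrypt renewals (the webroot path is an assumption — use whatever your ACME client is configured with):

```nginx
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    # Keep HTTP-01 challenges working over plain HTTP
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;  # assumed ACME webroot
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```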
3) Rate limit sanity
From a test machine, send a small burst:
for i in $(seq 1 60); do curl -s -o /dev/null -w "%{http_code}\n" https://yourdomain.com/wp-login.php; done | sort | uniq -c
You should see some 200/302 responses. If you pushed past the burst allowance, you should also see rejections: 503 by default, or 429 if you set limit_req_status 429.
4) Confirm sensitive files are blocked
curl -I https://yourdomain.com/.git/config
curl -I https://yourdomain.com/.env
curl -I https://yourdomain.com/wp-config.php
Common pitfalls (and how to recover fast)
- HSTS too early: Users get stuck if HTTPS is broken. Keep it commented until you’ve verified all subdomains, redirects, and certificates.
- Overly aggressive rate limiting: You block real customers behind NATs (offices, mobile carriers). Apply limits to login/admin endpoints, not the entire site.
- Wrong PHP socket path: On Ubuntu, PHP-FPM sockets look like /run/php/php8.3-fpm.sock. Confirm with ls /run/php/.
- Breaking ACME challenges: If you use Let's Encrypt HTTP-01, ensure /.well-known/acme-challenge/ stays reachable on port 80.
If you need a bigger troubleshooting flow (502s, random timeouts, slow pages), use this checklist: VPS hosting troubleshooting checklist.
Summary: a secure baseline you can keep
You now have a practical baseline: less banner leakage, tighter TLS, safe headers, targeted rate limits, and fewer PHP foot-guns.
What keeps it working is operational. Monitor the impact, rotate logs, and revisit limits as traffic changes.
If you run multiple sites, don’t skip the boring stuff. Keep tested backups, document your config layout, and leave yourself clean rollback points.
For stable performance and predictable security work, run these changes on a VPS with consistent I/O and enough RAM headroom.
If you’re upgrading or moving, HostMyCode VPS is a solid base. And managed VPS hosting is the simplest option if you want a second set of eyes on production hardening.
If your current server makes security work feel like defusing a bomb, move to a VPS plan you can control and measure. Start with a HostMyCode VPS for full root access, or choose managed VPS hosting if you want help keeping Nginx, PHP-FPM, and the OS baseline tight in 2026.
FAQ
Should I enable HSTS on my Nginx site?
Enable HSTS only after HTTPS works for every page and every required subdomain. Start with a shorter max-age (for example, a week), then increase once you’re confident.
Will rate limiting break real users?
It can if applied globally. Put limits on login, admin, and XML-RPC-style endpoints first. Watch access logs for 429/503 spikes from legitimate IP ranges.
Do I need ModSecurity on a small VPS?
Not always. If you can’t maintain the module and tune CRS rules, you’ll get more value from patching, strong TLS, and targeted rate limiting.
What’s the quickest way to validate my changes?
Run nginx -t, reload, then check headers with curl -I and verify sensitive paths return 403/404. Finish by scanning logs for unexpected 4xx/5xx bursts.