
Disk-full incidents on a VPS rarely start with your database. They usually start with logs: a noisy API, an auth brute-force attempt, or a sudden spike in upstream 5xx responses. The fix isn’t “delete /var/log/* and hope.” You need VPS log rotation best practices that put hard limits on growth, keep the history you’ll actually use, and avoid breaking services the next time rotation runs.
This is a practical playbook for 2026, aimed at developers and sysadmins running Linux VPS workloads: Nginx, systemd services, Docker containers, and the usual stack. You’ll get concrete config examples, verification commands, the mistakes that bite in production, and safe ways to roll back.
Why log rotation fails on real VPS workloads
On paper, logrotate + journald “just works.” In production, it breaks in familiar ways:
- Systemd-journald grows without a cap because persistent storage is enabled but limits aren’t set.
- Apps keep writing to renamed files because the process never received a reopen signal (classic “copytruncate vs. reload” problem).
- Compressed archives accumulate forever because retention is based on count, but rotation happens too often under load.
- Permissions break when logs are created by a service user but logrotate expects root ownership.
- Containers log to json-file until /var/lib/docker fills up, even if you rotate /var/log.
You’re not trying to rotate everything daily. You’re trying to keep the VPS stable, preserve enough history for incident response, and make the outcome predictable under load.
Prerequisites (what you need before you touch configs)
- A Linux VPS with systemd (Ubuntu 24.04/24.10, Debian 13, Rocky/AlmaLinux 10 are common in 2026).
- Root shell access (or sudo).
- At least 10 minutes of quiet time (don’t edit rotation rules mid-deploy).
If you’re still hardening the box, start with network controls. Your logs get a lot more useful once you’ve cut down obvious noise; this guide pairs well with UFW Firewall Setup for a VPS in 2026 and How to Harden Your Linux VPS for Production in 2026.
Step 1: Measure what’s actually consuming disk
Start with a baseline. You want to know whether the growth is coming from journald, classic files in /var/log, or container logs.
df -h /
sudo du -xh /var/log --max-depth=2 | sort -h | tail -n 25
sudo journalctl --disk-usage
Typical expected output for journald:
Archived and active journals take up 1.6G in the file system.
If you run Docker, check container logs too:
sudo du -xh /var/lib/docker/containers --max-depth=2 | sort -h | tail -n 15
Save these numbers. After changes, you’ll re-check and confirm rotation is doing what you think it’s doing.
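If you want that baseline captured in one place for a before/after diff, a small sketch (the /tmp output path is an assumption; store it wherever you keep ops notes):

```shell
# Minimal sketch: snapshot the three usual suspects into one dated file
# so you can diff it after your rotation changes. The /tmp output path
# is an assumption; adjust to taste.
OUT="/tmp/disk-baseline-$(date +%Y%m%d-%H%M%S).txt"
{
  echo "== root filesystem =="
  df -h /
  echo "== /var/log top offenders =="
  du -xh /var/log --max-depth=2 2>/dev/null | sort -h | tail -n 25
  echo "== journald =="
  journalctl --disk-usage 2>/dev/null || echo "journalctl unavailable"
} > "$OUT"
echo "Baseline saved to $OUT"
```

Run it again after each step and diff the two files to confirm the numbers moved the way you expected.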
Step 2 (VPS log rotation best practices): cap systemd-journald safely
On many VPS images in 2026, journald is persistent by default—or becomes persistent as soon as you create /var/log/journal. Persistent logs are helpful. Unlimited persistent logs are a slow-moving outage.
Edit:
sudo nano /etc/systemd/journald.conf
Set explicit limits (example values for a 40–80 GB root disk VPS):
[Journal]
Storage=persistent
SystemMaxUse=800M
SystemKeepFree=2G
SystemMaxFileSize=80M
SystemMaxFiles=12
MaxRetentionSec=14day
Compress=yes
Apply and verify:
sudo systemctl restart systemd-journald
sudo journalctl --disk-usage
What you want to see: usage stops climbing past the cap, and journald vacuums older entries as needed.
Pitfall: Don’t set SystemKeepFree absurdly high on small disks. On a 20 GB root partition, “KeepFree=10G” can turn into “journald can’t write,” which makes troubleshooting painful fast.
Rollback: Put the previous values back in /etc/systemd/journald.conf, restart journald, and (if you want to remove persistence) delete /var/log/journal and restart again. Remember: deleting the directory discards history.
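If you want to check the cap from a script rather than by eye, you can parse journalctl --disk-usage and compare it to your SystemMaxUse value. A sketch (the 800M cap mirrors the example config above; the size parsing is deliberately rough):

```shell
# to_bytes: convert journald-style sizes ("1.6G", "800M", "512K") to bytes.
to_bytes() {
  echo "$1" | awk '
    /G$/ { printf "%.0f\n", $1 * 1024 * 1024 * 1024; next }
    /M$/ { printf "%.0f\n", $1 * 1024 * 1024; next }
    /K$/ { printf "%.0f\n", $1 * 1024; next }
          { printf "%.0f\n", $1 }'
}

CAP=$(to_bytes 800M)   # keep in sync with SystemMaxUse
RAW=$(journalctl --disk-usage 2>/dev/null | grep -oE '[0-9.]+[KMG]' | head -n 1)
USAGE=$(to_bytes "${RAW:-0}")
if [ "$USAGE" -le "$CAP" ]; then
  echo "OK: journald usage ${RAW:-0} is within the cap"
else
  echo "WARN: journald usage ${RAW:-0} exceeds the cap"
fi
```

A WARN here right after a restart is normal for a short while; journald vacuums on its own schedule.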
Step 3: Tune logrotate defaults for predictable retention
Most distros already install logrotate and trigger it daily via systemd timers or cron. That doesn’t mean the defaults match your traffic.
Check the global config:
sudo sed -n '1,200p' /etc/logrotate.conf
A solid baseline that avoids “archives forever” looks like this:
# /etc/logrotate.conf
weekly
rotate 8
create
compress
delaycompress
dateext
su root adm
# Include per-package rules
include /etc/logrotate.d
Why weekly + rotate 8? It’s about two months of history without turning /var/log into an archive warehouse. For high-volume services you’ll override this with per-service rules (next steps).
Pitfall: On Ubuntu/Debian, the su directive matters when logs are group-owned by adm. Without it, logrotate can hit “permission denied,” skip files, and leave you thinking rotation is fine.
Verification: Do a dry run first, then force a run:
sudo logrotate -d /etc/logrotate.conf | tail -n 40
sudo logrotate -f /etc/logrotate.conf
After forcing rotation, confirm you see new .gz archives and fresh log files with the expected ownership and mode.
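logrotate also records what it last rotated, and when, in a state file, which is handy for confirming a forced run actually touched your files. A sketch (the Debian/Ubuntu state path is an assumption; RHEL-family systems typically use /var/lib/logrotate/logrotate.status):

```shell
# show_recent_rotations: print the most recently rotated entries from a
# logrotate state file. Lines look like: "/var/log/syslog" 2026-4-13-0:0:0
# Note: the sort is lexical, which is fine for a quick eyeball check.
show_recent_rotations() {
  grep '^"' "$1" | sort -t' ' -k2 | tail -n 10
}

STATE=/var/lib/logrotate/status            # Debian/Ubuntu default (assumption)
[ -f "$STATE" ] || STATE=/var/lib/logrotate/logrotate.status
if [ -f "$STATE" ]; then
  show_recent_rotations "$STATE"
else
  echo "no logrotate state file found"
fi
```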
Step 4: Add a per-site Nginx rotation rule (without dropping logs)
Nginx usually ships with a working logrotate rule. The trouble starts when you add custom vhosts, nonstandard log paths, or extra access logs.
Scenario: you run an API at api.kestrel-labs.internal with logs in /var/log/nginx/api_kestrel_access.log and api_kestrel_error.log.
Create:
sudo nano /etc/logrotate.d/nginx-api-kestrel
Use postrotate to tell Nginx to reopen its log files (so you don’t fall into the copytruncate trap):
/var/log/nginx/api_kestrel_access.log /var/log/nginx/api_kestrel_error.log {
daily
rotate 14
missingok
notifempty
compress
delaycompress
dateext
sharedscripts
create 0640 www-data adm
postrotate
/usr/sbin/nginx -t && systemctl kill -s USR1 nginx
endscript
}
Why USR1? Nginx reopens log files on USR1. You get clean rotation without truncating a file that’s actively being written.
Verify:
sudo logrotate -d /etc/logrotate.d/nginx-api-kestrel | tail -n 60
sudo logrotate -f /etc/logrotate.d/nginx-api-kestrel
ls -lh /var/log/nginx | grep api_kestrel | head
Expected: files like api_kestrel_access.log-20260413.gz plus a fresh api_kestrel_access.log with the right permissions.
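One more signal worth checking: the live log's inode should change across a forced rotation, because create hands Nginx a brand-new file. A sketch (paths match the example vhost; errors are swallowed so the check still reports something on a box without this rule):

```shell
# inode_of: print a file's inode, or "none" if it doesn't exist.
inode_of() { stat -c %i "$1" 2>/dev/null || echo none; }

LOG=/var/log/nginx/api_kestrel_access.log
BEFORE=$(inode_of "$LOG")
sudo logrotate -f /etc/logrotate.d/nginx-api-kestrel 2>/dev/null || true
AFTER=$(inode_of "$LOG")
if [ "$BEFORE" != "$AFTER" ]; then
  echo "rotated: live log is a fresh file (inode $AFTER)"
else
  echo "inode unchanged: rotation was skipped or the rule did not match"
fi
```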
If your Nginx config also handles complex routing, keep logging consistent across upstreams. This complements Route Multiple Applications Using Nginx URL Paths.
Step 5: Rotate application logs for systemd services (the simple, reliable pattern)
For many 2026-era services, the cleanest setup is: log to stdout/stderr, let journald store it, and cap journald. Still, plenty of apps write to files—legacy frameworks, some Java stacks, and internal tools that grew up around tail -f.
Scenario: your Go service invoice-agent writes to /var/log/invoice-agent/app.log and supports SIGHUP to reopen logs.
Create a rule:
sudo mkdir -p /var/log/invoice-agent
sudo nano /etc/logrotate.d/invoice-agent
/var/log/invoice-agent/*.log {
size 50M
rotate 10
missingok
notifempty
compress
delaycompress
dateext
create 0640 invoice-agent invoice-agent
sharedscripts
postrotate
systemctl kill -s HUP invoice-agent.service 2>/dev/null || true
endscript
}
Why size-based rotation? It tracks reality. If you hit a spike, you rotate sooner; if the service is quiet, you don’t manufacture empty archives on a schedule.
Verification: Generate a burst and rotate:
sudo bash -c 'for i in {1..20000}; do echo "$(date -Is) test line $i" >> /var/log/invoice-agent/app.log; done'
sudo logrotate -f /etc/logrotate.d/invoice-agent
ls -lh /var/log/invoice-agent | head
Pitfall: If your app doesn’t reopen logs on HUP, it will keep writing to the renamed file. At that point you have three options: add reopen support, switch to journald, or use copytruncate as a last resort (and accept that you may lose lines during truncation).
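You can detect this failure directly: once the rotated file is later compressed or removed, a stuck process shows an open descriptor pointing at a deleted file under /var/log. A sketch that scans /proc for exactly that pattern (the prefix is a parameter so you can point it at any log directory):

```shell
# check_stale_logs: list processes holding open descriptors to deleted
# files under a given directory. A hit usually means a daemon never
# reopened its log after rotation.
check_stale_logs() {
  prefix="$1"
  for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
      "$prefix"/*" (deleted)")
        pid=${fd#/proc/}; pid=${pid%%/*}
        echo "PID $pid still holds deleted log: $target"
        ;;
    esac
  done
}

check_stale_logs /var/log
```

Empty output is the good outcome. Any line it prints names a process you need to signal or restart.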
Rollback: Remove /etc/logrotate.d/invoice-agent and restart the service. Existing archives remain; delete them later once you’re sure you don’t need them for compliance or debugging.
Step 6: Stop Docker container logs from filling the VPS
You can have perfect rotation in /var/log and still run out of disk because Docker’s default json-file logs keep growing under /var/lib/docker/containers.
Check the current driver and log options:
docker info --format '{{.LoggingDriver}}'
To cap json-file logs, create or edit:
sudo nano /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
Restart Docker (plan a maintenance window if you have stateful containers):
sudo systemctl restart docker
Verification: Pick a noisy container and inspect the log files:
sudo ls -lh /var/lib/docker/containers/*/*-json.log 2>/dev/null | head
Pitfall: These settings apply to new containers. Existing containers may keep the old behavior until you recreate them. With Compose, expect a redeploy.
Rollback: Restore the previous /etc/docker/daemon.json and restart Docker. If Docker won’t start because of JSON syntax, check:
sudo journalctl -u docker -n 50 --no-pager
Step 7: Make rotation observable (so you know it’s working)
Rotation failures are often quiet. You want to catch “logrotate hasn’t run in 12 days” long before the disk hits 100%.
On systemd-based distros, start with timers and service status:
systemctl list-timers | grep -E 'logrotate|fstrim' || true
systemctl status logrotate.service --no-pager || true
Then add lightweight monitoring:
- Alert when root disk usage exceeds 80–85% for 10+ minutes.
- Alert when /var/log grows unusually fast.
- Alert when journald usage exceeds your cap (which indicates caps aren't applying).
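The first alert is easy to script yourself. A minimal cron-able sketch (the 85% threshold is an assumption, and the echo stands in for your real notification channel):

```shell
# Warn when the root filesystem crosses a usage threshold. Wire the
# ALERT branch into mail, a webhook, or whatever you actually monitor.
THRESHOLD=85
USED=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "ALERT: root disk at ${USED}% (threshold ${THRESHOLD}%)"
else
  echo "OK: root disk at ${USED}%"
fi
```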
If you need to understand the spike that triggered a log flood, pair this with Linux VPS monitoring with eBPF. It’s a fast way to confirm whether the storm came from an app bug, a retry loop, or a network problem.
Common pitfalls (and how to avoid them)
- Using copytruncate by default: It’s tempting, but it can drop lines under load. Prefer service reload/reopen signals (USR1 for Nginx, HUP for many daemons).
- Rotating symlinked logs: Some apps symlink app.log to a dated file. Rotate the real path, not the symlink target.
- Wrong ownership after rotation: If the app can't write new logs, it may crash or spam stderr. Use create with the correct user/group.
- Too many tiny archives: Hourly rotation without a clear reason burns inodes and makes basic ops (like ls and du) slower. Prefer size-based rules for noisy logs.
- Ignoring backup implications: If you back up /var/log, archives can dominate your backups. Decide what you actually need to keep.
If you already back up your VPS, align retention and excludes with your log strategy. This connects directly to VPS Backup Strategy 2026: Restic + S3 and VPS Disaster Recovery Planning in 2026.
Verification checklist (run this after you implement changes)
- Journald is capped: journalctl --disk-usage stays under your target.
- Logrotate has no errors: sudo logrotate -d /etc/logrotate.conf shows no permission failures.
- Nginx reopened logs: after forcing rotation, Nginx continues writing to the new .log files.
- App logs continue normally: the service can write to newly created logs and doesn't keep writing to dated archives.
- Container logs are capped: json logs don’t exceed max-size * max-file per container.
A quick live test for Nginx:
curl -I http://127.0.0.1/ 2>/dev/null | head -n 1
sudo tail -n 3 /var/log/nginx/api_kestrel_access.log
Rollback plan (so you can undo changes quickly)
Logging changes often fail later—at the next rotation event—when you’re not watching. A rollback plan keeps you from debugging under pressure.
- Snapshot the current config files:
sudo mkdir -p /root/logging-rollbacks
sudo cp -a /etc/systemd/journald.conf /root/logging-rollbacks/journald.conf.bak
sudo cp -a /etc/logrotate.conf /root/logging-rollbacks/logrotate.conf.bak
sudo cp -a /etc/logrotate.d /root/logging-rollbacks/logrotate.d.bak
- If journald breaks: restore /etc/systemd/journald.conf, restart systemd-journald, confirm logs return.
- If a service stops logging to files: check ownership on the new file and restore the previous logrotate rule for that service.
- If Docker won't start: validate JSON, restore the previous /etc/docker/daemon.json, restart.
Also keep a “get me breathing” set of commands nearby for incidents:
sudo journalctl --vacuum-size=200M
sudo find /var/log -type f -name '*.gz' -size +200M -print
Use vacuuming with care. It can save a crashed VPS, but it can also delete evidence you’ll want five minutes later.
Where HostMyCode fits (and why it matters for logs)
Log rotation gets easier when your disk and I/O aren’t already pinned. If you’re running an API, a CI runner, or a multi-service box, predictable resources reduce the “why did this spike?” moments that often end with a log flood.
For production workloads, consider a HostMyCode VPS for clean isolation and consistent storage performance. If you’d rather not babysit timers, permissions, and service restarts, managed VPS hosting is the pragmatic option.
If you’re seeing repeated disk alerts caused by logs, you usually need two things: sane caps and predictable resources. Start with a HostMyCode VPS, then move to managed VPS hosting if you want patching, monitoring, and day-to-day guardrails handled for you.
FAQ
Should I keep logs in journald or write to /var/log files?
If your stack logs cleanly to stdout/stderr, journald keeps things simple: one store, easy querying with journalctl, and a clear cap. File logs still make sense for apps that expect them, or for tools and workflows built around tailing files.
What retention should I set for a small VPS?
For a typical 1–2 vCPU VPS with a 40–80 GB root disk, keeping journald under 500–1000 MB and rotating key service logs for 2–8 weeks is a practical starting point. If you’re in a regulated environment, follow policy and ship logs off-host.
Why does logrotate show success but my app still writes to the old file?
The process kept the file descriptor open. Fix it by sending the app a reload/reopen signal in postrotate (USR1 for Nginx, HUP for many services). Use copytruncate only if you can’t reopen logs safely.
How do I prevent backups from being dominated by rotated logs?
Exclude large archives (for example /var/log/*.gz and container json logs) unless you truly need them. Keep smaller on-host retention, and ship important logs to a central store if required.
Next steps
- Ship critical logs off the VPS: once rotation is stable, consider a central log system (even a lightweight one) so a disk failure doesn’t erase your timeline.
- Add alerts for rotation failures: a weekly check for “last rotated” timestamps can prevent silent regressions after package upgrades.
- Revisit security logging: if you care about change tracking, pair file rotation with targeted auditing. Linux auditd log monitoring on a VPS (2026) is a solid next read.
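The "last rotated" check from the second bullet can be approximated without touching logrotate internals: look for a fresh rotated sibling next to the live file. A sketch (the 8-day window and the syslog example are assumptions):

```shell
# stale_rotation: report whether a live log has a rotated sibling
# (archive) younger than N days. The sibling pattern assumes dateext
# or numeric suffixes, e.g. app.log-20260413.gz or app.log.1.gz.
stale_rotation() {
  live="$1"; days="${2:-8}"
  if find "$(dirname "$live")" -maxdepth 1 \
       -name "$(basename "$live")[.-]*" -mtime -"$days" 2>/dev/null | grep -q .; then
    echo "OK: $live rotated within ${days} days"
  else
    echo "STALE: no recent archive for $live"
  fi
}

stale_rotation /var/log/syslog
```

Run it weekly over your handful of critical logs and alert on any STALE line.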
Summary
Good log rotation is boring on purpose. Cap journald, rotate high-volume services with reopen signals, and rein in container logs. Then verify it, watch it, and keep a rollback plan handy. If you want a stable base for these operational basics, start with a HostMyCode VPS and scale into managed VPS hosting once you’d rather spend time on the app than on the disk usage graph.