
You usually notice logs only after something breaks: a 502 spike, a brute-force run, or a disk filling at 3 a.m. Without central shipping, a quieter (and often pricier) failure follows: logs end up scattered across servers, rotated away, and hard to line up during an incident. Shipping Linux VPS logs to a central collector with rsyslog prevents that.
This walkthrough shows how to set up a small, production-friendly central syslog pipeline. You’ll run one “collector” VPS that receives logs over TLS. You’ll also configure one or more client VPS nodes to forward system logs, SSH/auth events, and Nginx access/error logs.
Along the way, you’ll add filtering, basic disk protection, and verification steps. The goal is simple: you can trust what you’re collecting.
What you’ll build (and why it matters)
- Collector VPS: rsyslog listens on TCP/6514 (syslog over TLS) and writes logs per-host to disk.
- Client VPS nodes: rsyslog forwards selected facilities/files (auth, syslog, Nginx access/error) to the collector.
- Security: TLS with a small private CA, firewall rules, and hostname-based file separation.
- Ops basics: rate limiting, queueing, rotation, and a simple “is it working?” checklist.
If you manage customer sites, reseller fleets, or multi-server WordPress stacks, a central collector cuts the time you spend guessing which box saw what.
If you’d rather not own OS patching and baseline hardening, managed VPS hosting can handle that while you stay focused on applications and clients.
Prerequisites
- Two Linux servers: 1 collector + 1 client (repeat for more clients).
- Ubuntu 24.04 LTS or Debian 12 on both ends (commands below target these). AlmaLinux/Rocky notes included where it matters.
- Root or sudo access.
- A private network is nice, but not required. If you go over the public Internet, TLS is mandatory.
Plan your log collector sizing (quick rule-of-thumb)
Pick your retention window before you touch configs. Log volume varies a lot, but these are reasonable starting points for typical web nodes in 2026:
- Small site VPS: 200MB–1GB/day (often dominated by Nginx access logs during spikes).
- Busy WordPress/WooCommerce node: 1–5GB/day thanks to bots, WAF logs, and noisy PHP warnings.
- API node with high RPS: 5GB+/day if you log full request/response metadata.
Estimate nodes × retention days × 1.3 for headroom. If you’re unsure, start with 7–14 days. Then adjust after you’ve measured real usage.
In high-traffic environments, logging and performance tuning usually move together. See VPS performance tuning for high-traffic sites.
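The multiplication above is easy to script. A minimal sketch — the node count, per-day volume, and retention below are placeholders; substitute your own measurements:

```shell
# Disk estimate: nodes x GB/day x retention days x 1.3 headroom.
# All three numbers are example placeholders.
NODES=10
GB_PER_DAY=2
RETENTION_DAYS=14
# bash only does integer math, so use awk for the 1.3 multiplier
awk -v n="$NODES" -v g="$GB_PER_DAY" -v d="$RETENTION_DAYS" \
  'BEGIN { printf "Provision at least %.0f GB on the collector\n", n * g * d * 1.3 }'
# -> Provision at least 364 GB on the collector
```

Re-run it after a week of real traffic and adjust the per-day figure.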
Step 1: Provision the collector VPS and lock down the basics
Choose a VPS with enough disk and predictable I/O. For centralized logs, steady storage performance matters more than peak CPU.
A HostMyCode VPS works well here because you can scale disk and memory without migrating the rest of your stack.
Install rsyslog and helpers
sudo apt update
sudo apt install -y rsyslog rsyslog-gnutls logrotate
sudo systemctl enable --now rsyslog
Open only the port you need
We’ll receive syslog over TLS on TCP/6514.
sudo ufw allow 6514/tcp
sudo ufw status
If you manage rules with nftables, allow TCP/6514 only from your client IP ranges.
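A hedged sketch of that nftables rule — the inet filter table, input chain, and 10.0.0.0/24 client range are assumptions; match them to your existing ruleset:

```shell
# Allow syslog-TLS only from the client subnet (adjust table/chain/range).
sudo nft add rule inet filter input ip saddr 10.0.0.0/24 tcp dport 6514 accept
```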
Step 2: Create a small private CA and certs for TLS syslog
rsyslog uses GnuTLS for TLS support. You’ll create a private CA first. Then you’ll issue one server certificate for the collector and a client certificate for each node.
This gives you encryption and real authentication. It’s not “TLS but anyone can connect.”
On the collector, make a directory for PKI material:
sudo mkdir -p /etc/rsyslog/pki
sudo chmod 700 /etc/rsyslog/pki
cd /etc/rsyslog/pki
Create the CA
sudo openssl genrsa -out ca.key 4096
sudo openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 \
-subj "/C=US/ST=NA/L=NA/O=HostMyCode-Logs/OU=CA/CN=hostmycode-log-ca" \
-out ca.crt
Create the collector server certificate
Use the collector’s hostname as CN and add a SAN. Replace values to match your DNS.
COLLECTOR_FQDN="logs.example.com"
sudo openssl genrsa -out server.key 4096
sudo openssl req -new -key server.key \
-subj "/C=US/O=HostMyCode-Logs/OU=Server/CN=${COLLECTOR_FQDN}" \
-out server.csr
cat <<EOF | sudo tee server.ext
subjectAltName = DNS:${COLLECTOR_FQDN}
extendedKeyUsage = serverAuth
EOF
sudo openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out server.crt -days 825 -sha256 -extfile server.ext
Create a client certificate (repeat per VPS)
Name the cert after the client host. This small detail pays off when you troubleshoot.
CLIENT_NAME="web-01"
sudo openssl genrsa -out ${CLIENT_NAME}.key 4096
sudo openssl req -new -key ${CLIENT_NAME}.key \
-subj "/C=US/O=HostMyCode-Logs/OU=Client/CN=${CLIENT_NAME}" \
-out ${CLIENT_NAME}.csr
cat <<EOF | sudo tee ${CLIENT_NAME}.ext
extendedKeyUsage = clientAuth
EOF
sudo openssl x509 -req -in ${CLIENT_NAME}.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out ${CLIENT_NAME}.crt -days 825 -sha256 -extfile ${CLIENT_NAME}.ext
Security note: treat private keys like credentials. Don’t email them. Move them with scp over SSH or your normal secrets workflow.
If you already rotate secrets, the approach in Linux VPS secrets rotation with sops + age fits well.
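One workable transfer pattern over SSH (the web-01 name and destination path are examples):

```shell
# From the collector: push the CA cert plus this node's keypair over SSH.
scp /etc/rsyslog/pki/ca.crt \
    /etc/rsyslog/pki/web-01.crt \
    /etc/rsyslog/pki/web-01.key \
    root@web-01.example.com:~/
# Then, on the client, move them into /etc/rsyslog/pki/ (Step 4)
# and delete the staging copies.
```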
Step 3: Configure the collector to receive syslog over TLS
On Ubuntu/Debian, place a dedicated receiver config in /etc/rsyslog.d/. It’s easier to audit later.
sudo tee /etc/rsyslog.d/10-tls-receiver.conf > /dev/null <<'EOF'
# TLS receiver on TCP/6514
module(load="imtcp")
module(load="gtls")
# Where certs live
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/rsyslog/pki/ca.crt"
DefaultNetstreamDriverCertFile="/etc/rsyslog/pki/server.crt"
DefaultNetstreamDriverKeyFile="/etc/rsyslog/pki/server.key"
)
# Require client certs
$InputTCPServerStreamDriverMode 1
$InputTCPServerStreamDriverAuthMode x509/name
# Accept any valid client cert signed by our CA
# (You can tighten this to specific CNs later.)
$InputTCPServerStreamDriverPermittedPeer *
# Write logs per host to reduce collisions
template(name="PerHostFile" type="string"
string="/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log")
# Keep remote traffic in its own ruleset so it doesn't mix with
# the collector's own local logs (and vice versa)
ruleset(name="remote") {
# Create directories on the fly, and keep file permissions sane
action(type="omfile" DynaFile="PerHostFile" CreateDirs="on" DirCreateMode="0750" FileCreateMode="0640")
}
# Start listener, bound to the ruleset above
input(type="imtcp" port="6514" ruleset="remote")
EOF
Make sure permissions on the PKI files are tight (on the collector):
sudo chown -R root:root /etc/rsyslog/pki
sudo chmod 600 /etc/rsyslog/pki/server.key /etc/rsyslog/pki/ca.key
sudo chmod 644 /etc/rsyslog/pki/server.crt /etc/rsyslog/pki/ca.crt
Create the base directory for remote logs:
sudo mkdir -p /var/log/remote
sudo chown syslog:adm /var/log/remote
sudo chmod 0750 /var/log/remote
On AlmaLinux/Rocky, rsyslog runs as root by default, so use root:root there instead of syslog:adm.
Restart and confirm rsyslog parses the config cleanly:
sudo rsyslogd -N1
sudo systemctl restart rsyslog
sudo systemctl status rsyslog --no-pager
Collector-side quick connectivity check
You’ll do an end-to-end test from a client with logger in a moment. For now, confirm the port is listening:
sudo ss -lntp | grep 6514 || true
Step 4: Configure a client VPS to forward logs over TLS
On the client VPS (Ubuntu 24.04 / Debian 12):
sudo apt update
sudo apt install -y rsyslog rsyslog-gnutls
Copy the CA + client certs to the client
From your workstation (or from the collector), copy:
- ca.crt
- web-01.crt and web-01.key (or your chosen client name)
Place them on the client under /etc/rsyslog/pki/:
sudo mkdir -p /etc/rsyslog/pki
sudo chmod 700 /etc/rsyslog/pki
sudo cp ca.crt /etc/rsyslog/pki/ca.crt
sudo cp web-01.crt /etc/rsyslog/pki/client.crt
sudo cp web-01.key /etc/rsyslog/pki/client.key
sudo chown -R root:root /etc/rsyslog/pki
sudo chmod 600 /etc/rsyslog/pki/client.key
sudo chmod 644 /etc/rsyslog/pki/client.crt /etc/rsyslog/pki/ca.crt
Add a forwarding config
Create /etc/rsyslog.d/60-forward-to-collector.conf. Replace logs.example.com with your collector hostname.
COLLECTOR="logs.example.com"
sudo tee /etc/rsyslog.d/60-forward-to-collector.conf > /dev/null <<EOF
module(load="gtls")
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/rsyslog/pki/ca.crt"
DefaultNetstreamDriverCertFile="/etc/rsyslog/pki/client.crt"
DefaultNetstreamDriverKeyFile="/etc/rsyslog/pki/client.key"
)
# Use a disk-backed queue so brief outages don't drop logs
action(
type="omfwd"
target="${COLLECTOR}"
port="6514"
protocol="tcp"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="${COLLECTOR}"
action.resumeRetryCount="-1"
queue.type="LinkedList"
queue.filename="fwdToCollector"
queue.maxdiskspace="2g"
queue.saveonshutdown="on"
)
EOF
Validate and restart:
sudo rsyslogd -N1
sudo systemctl restart rsyslog
Send a test message
logger -t tlstest -p authpriv.notice "rsyslog TLS forward test from $(hostname -f)"
On the collector, confirm a file appears and contains the message:
sudo tail -n 50 /var/log/remote/web-01/* 2>/dev/null | tail -n 10
Step 5: Forward Nginx access/error logs cleanly (without duplicating everything)
On many distros, system logs already flow through syslog/journald. Nginx is different. It often writes straight to /var/log/nginx/access.log and /var/log/nginx/error.log.
You have two common approaches:
- Option A (simple): keep Nginx logs as files and have rsyslog “tail” them with imfile.
- Option B (more integrated): switch Nginx to syslog output (fine in some setups, but file logging is still standard).
This tutorial uses Option A. It’s predictable and easy to debug.
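For reference, Option B looks roughly like this in nginx.conf — a sketch only, not used in this tutorial; the facility and tags simply mirror the local6 convention used elsewhere in this guide:

```nginx
# Option B sketch: send logs to the local rsyslog over /dev/log.
access_log syslog:server=unix:/dev/log,facility=local6,tag=nginx_access combined;
error_log  syslog:server=unix:/dev/log,facility=local6,tag=nginx_error warn;
```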
Enable rsyslog imfile on the client
sudo tee /etc/rsyslog.d/20-imfile-nginx.conf > /dev/null <<'EOF'
module(load="imfile")
# Nginx access log
input(type="imfile"
File="/var/log/nginx/access.log"
Tag="nginx_access:"
Severity="info"
Facility="local6"
)
# Nginx error log
input(type="imfile"
File="/var/log/nginx/error.log"
Tag="nginx_error:"
Severity="warning"
Facility="local6"
)
EOF
Restart rsyslog:
sudo systemctl restart rsyslog
Verify on the collector: check the per-program files for that host:
sudo ls -la /var/log/remote/web-01/
sudo tail -n 20 /var/log/remote/web-01/nginx_access.log
sudo tail -n 20 /var/log/remote/web-01/nginx_error.log
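Once access logs are centralized, quick triage becomes a one-liner. A sketch, assuming the default combined log format — the inline sample lines stand in for /var/log/remote/web-01/nginx_access.log, and because the syslog header added in transit shifts whitespace-separated columns, the status code is matched by pattern rather than by field position:

```shell
# Count HTTP status codes in a centralized access log.
# The status is the 3-digit number right after the quoted request line.
awk '
match($0, /" [0-9][0-9][0-9] /) {
  codes[substr($0, RSTART + 2, 3)]++
}
END { for (c in codes) print c, codes[c] }
' <<'EOF'
Jan 10 00:00:01 web-01 nginx_access: 203.0.113.9 - - [10/Jan/2026:00:00:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.5.0"
Jan 10 00:00:02 web-01 nginx_access: 203.0.113.9 - - [10/Jan/2026:00:00:02 +0000] "GET /missing HTTP/1.1" 404 162 "-" "curl/8.5.0"
Jan 10 00:00:03 web-01 nginx_access: 203.0.113.9 - - [10/Jan/2026:00:00:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.5.0"
EOF
```

Swap the heredoc for the real file path when running it on the collector.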
If the same server also runs a reverse proxy, keep your TLS decisions consistent across layers.
For a refresher on Nginx proxy TLS basics, see Nginx SSL reverse proxy with Let's Encrypt (the Nginx concepts still apply in 2026; adjust OS versions as needed).
Step 6: Filter what you ship (reduce noise, keep the useful bits)
Forwarding everything sounds convenient. It gets expensive fast, and it slows incident response when you’re scanning junk.
A good baseline is:
- Keep authpriv (SSH logins, sudo, PAM events).
- Keep kernel warnings/errors (handy for I/O and network issues).
- Keep Nginx error always; keep access logs if you need traffic forensics.
On the client, forward only the facilities you care about. The cleanest pattern is to keep all forwarding in one rules file and wrap each action in a condition.
Example: ship auth + nginx, keep everything else local
In /etc/rsyslog.d/60-forward-to-collector.conf, keep the module/global TLS section from Step 4, but replace the unconditional action(type="omfwd" ...) with conditional actions — otherwise you’ll keep shipping everything:
# Forward only authpriv + local6 (nginx via imfile) to the collector
if ($syslogfacility-text == 'authpriv') then {
action(type="omfwd" target="logs.example.com" port="6514" protocol="tcp"
StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="logs.example.com" action.resumeRetryCount="-1"
queue.type="LinkedList" queue.filename="fwdAuth" queue.maxdiskspace="1g" queue.saveonshutdown="on")
stop
}
if ($syslogfacility-text == 'local6') then {
action(type="omfwd" target="logs.example.com" port="6514" protocol="tcp"
StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="logs.example.com" action.resumeRetryCount="-1"
queue.type="LinkedList" queue.filename="fwdNginx" queue.maxdiskspace="2g" queue.saveonshutdown="on")
stop
}
Restart rsyslog. Then retest with logger and by generating an Nginx request. One ordering note: rsyslog reads /etc/rsyslog.d/ files in lexical order, and stop ends processing for matched messages. Because 60- sorts after the distro’s 50-default.conf, local logging still runs first; rename the file to sort earlier and those messages stop reaching local files.
Step 7: Protect the collector from disk exhaustion
A central log box can fail simply because it’s doing its job. Put two guardrails in place early: rotation and retention.
Rotate remote logs with logrotate
Create a logrotate rule on the collector for /var/log/remote:
sudo tee /etc/logrotate.d/remote-rsyslog > /dev/null <<'EOF'
/var/log/remote/*/*.log {
daily
rotate 14
missingok
notifempty
compress
delaycompress
dateext
maxage 14
su syslog adm
create 0640 syslog adm
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate || true
endscript
}
EOF
Dry-run it:
sudo logrotate -d /etc/logrotate.d/remote-rsyslog 2>&1 | tail -n 40
(logrotate -d writes to stderr, hence the 2>&1.)
Add rsyslog rate limiting
Rate limiting helps during bursts (hello, bot floods). It keeps the collector from starving. Two knobs matter here:
- On each client, imuxsock rate-limits local programs flooding /dev/log. The module is usually already loaded by the stock /etc/rsyslog.conf, and loading it twice fails validation, so tune the existing line instead of adding a new file:
module(load="imuxsock" SysSock.RateLimit.Interval="2" SysSock.RateLimit.Burst="5000")
- On the collector, recent rsyslog versions can rate-limit network input directly: add RateLimit.Interval and RateLimit.Burst parameters to the input(type="imtcp" ...) line in 10-tls-receiver.conf.
sudo rsyslogd -N1
sudo systemctl restart rsyslog
Step 8: Troubleshoot shipping failures fast (diagnostic checklist)
When forwarding stops, you want a short path to the cause. Run these checks in order.
1) Confirm network reachability
On the client:
getent hosts logs.example.com
nc -vz logs.example.com 6514
2) Confirm TLS and permitted peer name
If you set StreamDriverPermittedPeers="logs.example.com" but your server cert SAN doesn’t match, the handshake fails. It can look like “nothing happens” unless you read rsyslog’s logs.
On the client:
sudo journalctl -u rsyslog -n 200 --no-pager
Search for GnuTLS or handshake errors. Fix it by regenerating the server cert with the correct SANs or adjusting StreamDriverPermittedPeers.
3) Validate rsyslog configs
sudo rsyslogd -N1
This catches the easy stuff: typos, missing modules, and broken syntax that a plain service restart might not surface.
4) Check the forwarding queue
If the collector is down, you should see queue files under /var/spool/rsyslog/ (exact location varies by config). On the client:
sudo ls -la /var/spool/rsyslog 2>/dev/null || true
sudo du -sh /var/spool/rsyslog 2>/dev/null || true
5) Confirm the collector is writing per-host logs
sudo find /var/log/remote -maxdepth 2 -type f -name '*.log' | head
If you want to tie this into broader monitoring (CPU, RAM, disk latency, network), pair it with Linux VPS performance monitoring in 2026.
Step 9: Common pitfalls (and how to avoid them)
- Wrong hostname in TLS checks: the collector’s cert SAN must match the exact name in StreamDriverPermittedPeers.
- Forgetting to install rsyslog-gnutls: without it, TLS modules fail to load.
- Permission issues on key files: private keys should be 0600. rsyslog won’t always tell you clearly when it can’t read them.
- Duplicate Nginx logs: if you forward both syslog and imfile, you’ll ingest the same events twice. Pick one path.
- Collector disk fills: set rotation early. Also set alerts for disk usage on the collector.
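That disk alert doesn’t need a monitoring stack to start with. A minimal cron-able sketch — the 90% threshold and the /var/log mount are assumptions; point it at whatever filesystem holds /var/log/remote:

```shell
# Warn when the log filesystem passes a usage threshold.
THRESHOLD=90
USAGE=$(df --output=pcent /var/log | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "ALERT: log disk at ${USAGE}%"
else
  echo "OK: log disk at ${USAGE}%"
fi
```

Wire the ALERT branch to email or your chat webhook and run it from cron every few minutes.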
Step 10: Expand to multiple servers (simple conventions that scale)
Once one client is clean, adding 10–100 nodes is mostly about consistency:
- Issue a unique client cert per node (CN = hostname).
- Keep the same collector hostname everywhere (avoid IPs in configs if you can).
- Use a consistent facility for app logs (for example, local6 for Nginx, local5 for a custom app).
- Use /var/log/remote/%HOSTNAME%/ on the collector so each host stays isolated.
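Issuing one cert per node is easy to script. This is a self-contained sketch — it builds a throwaway demo CA and 2048-bit keys in a temp directory, and the hostnames are placeholders; to use it for real, drop the CA-generation lines and point it at your existing ca.crt/ca.key:

```shell
# Batch-issue one client cert per node (CN = hostname).
set -e
cd "$(mktemp -d)"
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes -key ca.key -sha256 -days 30 \
  -subj "/CN=demo-log-ca" -out ca.crt
for host in web-01 web-02 db-01; do
  openssl genrsa -out "$host.key" 2048 2>/dev/null
  openssl req -new -key "$host.key" -subj "/CN=$host" -out "$host.csr"
  printf 'extendedKeyUsage = clientAuth\n' > "$host.ext"
  openssl x509 -req -in "$host.csr" -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out "$host.crt" -days 30 -sha256 -extfile "$host.ext" 2>/dev/null
  openssl x509 -noout -subject -in "$host.crt"   # confirm CN = hostname
done
```

Because the CN doubles as the per-host directory name on the collector, keeping CN = hostname makes every later troubleshooting step easier.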
If you manage a lot of customer VPS instances, this is also where managed ops can save real time.
With HostMyCode managed VPS hosting, you can standardize baselines and keep secure updates moving without changing your logging design.
Summary: a reliable baseline for centralized logs on VPS
Central syslog won’t replace a full observability stack. It does close a common operational gap.
You get retained, searchable logs even when a node dies or rotates local files away. With TLS, per-host file separation, disk-backed queues, and rotation, this setup stays steady under normal production load.
If you’re doing this for client sites or high-traffic services, start with a dedicated collector. Scale storage as retention needs change.
A HostMyCode VPS is a practical collector choice because you can scale it independently from your web/app nodes.
Want a clean place to run your collector and web nodes without overspending? Use a HostMyCode VPS for the log collector, and move production sites to managed VPS hosting if you’d rather hand off patching and baseline server maintenance.
FAQ
Should I use UDP/514 instead of TLS on 6514?
Use TCP/6514 with TLS in 2026 unless you have a private, isolated network and a strong reason to accept packet loss. UDP syslog drops messages under load and has no built-in encryption.
Can I ship journald logs directly?
Yes. Many distros already forward journald to rsyslog. For deeper journald-centric setups you can use additional inputs, but rsyslog forwarding is a solid baseline.
How do I prevent the collector from becoming a single point of failure?
Start with disk-backed queues on clients (covered above). If you need higher availability, run a second collector and forward to both, or implement DNS-based failover with verification and rollback.
Is this enough for compliance logging?
It’s a baseline. For compliance, you typically add tamper-evident storage, stricter access controls, and clear retention policies. Centralizing logs is still step one.
What about WordPress-specific logs?
WordPress itself logs mostly through PHP and the web server. If you’re tuning a WordPress stack, align logging with performance and security needs; see WordPress hosting performance optimization.