
Most “logging setups” fall apart the first time you need to prove anything: who logged in, what changed, and whether someone touched the evidence. Linux VPS compliance logging has very little to do with dashboards. It’s about controls you can defend under pressure—centralization, encryption in transit, sane retention, and an audit trail you can replay.
This is a practical guide for developers and sysadmins running one or more VPS instances who want audit-friendly logs without turning the fleet into a research project. The stack stays intentionally conservative: rsyslog, TLS, clear per-service tagging, and a retention policy you can explain to a reviewer without hand-waving.
Scenario: two app servers, one log collector, and a real audit question
Assume you run a small SaaS API on two VPS nodes (api-1, api-2) behind a reverse proxy. Then you get the kind of question that sounds simple until you have to answer it:
- “Can you prove no one edited the auth logs on the server after the incident?”
- “Can you show all sudo activity for the last 90 days?”
- “Can you separate app logs from system logs and keep them for different durations?”
If your plan is “SSH in and grep /var/log,” you don’t have compliance logging. You have wishful thinking.
Prerequisites (what you should have before you touch logging)
- At least 2 VPS: one as a log collector, one or more as clients.
- DNS or stable IPs for collector and clients (hostnames help for certs).
- Time sync enabled (systemd-timesyncd or chrony). Log timelines are useless without reliable clocks.
- Root or sudo access on all nodes.
Hosting note: the collector benefits from predictable disk and CPU. If you’re centralizing logs from multiple servers, start with a small dedicated logging box on a HostMyCode VPS, then adjust storage once your retention and volume are real numbers instead of guesses.
Why centralization beats “just ship to a SaaS” for many teams
Managed log platforms can be great. They’re also not always the right first step—especially if you’re trying to satisfy an auditor, not build a pretty dashboard.
Teams in regulated environments often need:
- Clear data residency (where logs live, who can access them).
- Deterministic retention (90/180/365 days, per log class).
- Minimal moving parts (fewer dependencies during incidents).
A central rsyslog collector gives you a baseline that still works when a deploy goes sideways or you’re running incident response off a terminal. If you want a broader runbook for that moment, match this with your internal process; HostMyCode also has a solid reference on incident response triage and containment.
Linux VPS compliance logging design (simple and defensible)
This is the pattern that tends to pass audits without drama:
- Clients forward logs to a collector over TLS.
- The collector stores logs to disk using host + facility + date paths.
- Access controls: only a small group can read logs; even fewer can delete.
- Retention enforced by policy (logrotate or filesystem lifecycle scripts).
- Verification documented: you can show forwarding is active and logs are intact.
It’s not flashy. It’s evidence.
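Concretely, the collector's on-disk layout ends up looking like this (hostnames and dates illustrative):

```
/var/log/central/
├── api-1/
│   ├── auth/2026-04-17.log
│   └── local0/2026-04-17.log
└── api-2/
    ├── auth/2026-04-17.log
    └── local0/2026-04-17.log
```

One directory per sending host, one per log class, one file per day. That shape is what makes retention, backups, and audit greps boring in the good way.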
Step-by-step: set up the collector (rsyslog + TLS + structured file layout)
This section stays hands-on because “the concept” won’t help at 2 a.m. The example uses Debian 12/Ubuntu 24.04 LTS-style paths, but the rsyslog pieces map cleanly to RHEL/AlmaLinux too.
1) Install rsyslog and enable the TLS input
sudo apt update
sudo apt install -y rsyslog rsyslog-gnutls
Confirm rsyslog is running:
systemctl status rsyslog --no-pager
Expected output includes active (running).
2) Create a dedicated directory layout for compliance logs
Use a predictable path so backups and retention rules stay simple:
sudo mkdir -p /var/log/central
sudo chown -R syslog:adm /var/log/central
sudo chmod 2750 /var/log/central
In 2750, the leading 2 is the setgid bit: files and subdirectories created inside inherit the adm group, and 750 keeps the tree readable only by owner and group.
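To see what that mode does before applying it to real logs, you can reproduce it on a scratch directory (GNU coreutils assumed):

```shell
# Minimal demo of mode 2750: setgid (the leading 2) plus rwxr-x---,
# shown on a throwaway directory rather than /var/log/central.
d=$(mktemp -d)
mkdir "$d/central"
chmod 2750 "$d/central"
stat -c '%A' "$d/central"   # prints: drwxr-s---
```

The lowercase `s` in the group position is the setgid bit; that's what makes new files keep the adm group instead of the creating user's group.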
3) Generate a small private CA and server certificate
For internal log transport, a private CA is usually the cleanest option. If you already have an org PKI, use it. If not, a local CA works fine for small fleets.
Install openssl if needed:
sudo apt install -y openssl
Create a CA (store keys with care):
sudo install -d -m 0700 /etc/rsyslog/tls
cd /etc/rsyslog/tls
sudo openssl genrsa -out ca.key 4096
sudo openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 \
-subj "/C=US/ST=NA/L=NA/O=HostMyCode-Labs/OU=Logging-CA/CN=hmclogs-ca" \
-out ca.crt
Create a server key + certificate request (CN = collector hostname):
sudo openssl genrsa -out collector.key 4096
sudo openssl req -new -key collector.key \
-subj "/C=US/ST=NA/L=NA/O=HostMyCode-Labs/OU=Logging/CN=log-collector.internal" \
-out collector.csr
Sign it:
sudo openssl x509 -req -in collector.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out collector.crt -days 825 -sha256
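If you want to rehearse the whole CA flow safely first, the same commands run fine in a scratch directory with throwaway 2048-bit keys (smaller purely for speed), and `openssl verify` confirms the chain before you deploy anything:

```shell
# Throwaway rehearsal of the CA + server cert flow in a scratch dir.
set -euo pipefail
work=$(mktemp -d); cd "$work"
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 \
  -subj "/O=HostMyCode-Labs/CN=hmclogs-ca" -out ca.crt
openssl genrsa -out collector.key 2048 2>/dev/null
openssl req -new -key collector.key \
  -subj "/O=HostMyCode-Labs/CN=log-collector.internal" -out collector.csr
openssl x509 -req -in collector.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out collector.crt -days 825 -sha256 2>/dev/null
# The check worth memorizing: does the server cert chain to the CA?
openssl verify -CAfile ca.crt collector.crt   # expected: collector.crt: OK
```

Run the same `openssl verify` against the real files in /etc/rsyslog/tls after signing; if it doesn't say OK there, rsyslog's TLS handshakes won't work either.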
Lock down permissions:
sudo chown root:root /etc/rsyslog/tls/*
sudo chmod 0600 /etc/rsyslog/tls/*.key
sudo chmod 0644 /etc/rsyslog/tls/*.crt
4) Configure rsyslog to listen on a dedicated TLS port
Pick a non-default port to avoid collisions and “mystery defaults.” We’ll use 6516/tcp (6514 is the IANA-registered port for syslog over TLS; 6516 stays out of the way in a lot of environments).
Create /etc/rsyslog.d/20-central-tls.conf:
# /etc/rsyslog.d/20-central-tls.conf
# TLS parameters. Note: gtls is a netstream driver selected via global(),
# not a module you can load with module(load=...).
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog/tls/ca.crt"
  DefaultNetstreamDriverCertFile="/etc/rsyslog/tls/collector.crt"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog/tls/collector.key"
)
# Listen for TLS syslog
module(
  load="imtcp"
  StreamDriver.Name="gtls"
  StreamDriver.Mode="1"        # TLS-only
  StreamDriver.AuthMode="anon" # x509 for encryption; add client auth later if needed
)
# Template: /var/log/central/HOST/FACILITY/YYYY-MM-DD.log
template(name="CentralFile" type="string"
  string="/var/log/central/%HOSTNAME%/%syslogfacility-text%/%$YEAR%-%$MONTH%-%$DAY%.log"
)
# Write everything received on this listener to the central layout.
# Binding a ruleset to the input keeps the collector's own local logs
# out of the central tree and avoids a blanket "stop" on local traffic.
ruleset(name="central") {
  action(type="omfile" dynaFile="CentralFile")
}
input(type="imtcp" port="6516" ruleset="central")
Set safe creation modes so rsyslog builds the template path on demand (directory auto-creation defaults to on; these legacy-style globals pin the permissions):
sudo tee /etc/rsyslog.d/05-dircreate.conf >/dev/null <<'EOF'
# Ensure rsyslog can create directories for the template path
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$CreateDirs on
$DirCreateMode 0750
$FileCreateMode 0640
$Umask 0027
EOF
Validate config and restart:
sudo rsyslogd -N1
sudo systemctl restart rsyslog
rsyslogd -N1 should report no errors and exit 0. If it doesn’t, fix syntax first, then restart.
5) Open the collector firewall for 6516/tcp
If you use nftables/ufw, make the rule explicit. Example with ufw:
sudo ufw allow 6516/tcp comment 'syslog over TLS'
sudo ufw status
If you’re standardizing firewall policy, HostMyCode’s guide on nftables firewall logging with safe rollbacks pairs well with this setup.
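If you run nftables instead of ufw, the equivalent allow rule is a one-liner. This assumes the common inet filter table with an input chain; adjust the table and chain names to your ruleset:

```shell
sudo nft add rule inet filter input tcp dport 6516 ct state new accept
sudo nft list chain inet filter input | grep 6516
```

Remember to persist it in /etc/nftables.conf (or your include files); `nft add rule` alone doesn't survive a reboot.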
Step-by-step: configure a client to forward logs over TLS
On each app VPS, install rsyslog-gnutls, trust the collector CA, then forward the log classes you actually care about. “Forward everything” sounds thorough; it usually just burns disk and attention.
1) Install rsyslog TLS support on the client
sudo apt update
sudo apt install -y rsyslog rsyslog-gnutls
2) Copy the CA certificate to the client
From the collector, copy /etc/rsyslog/tls/ca.crt to the client (scp, config management, or a secrets tool all work). Place it at /etc/rsyslog/tls/ca.crt:
sudo install -d -m 0755 /etc/rsyslog/tls
sudo cp /tmp/ca.crt /etc/rsyslog/tls/ca.crt
sudo chmod 0644 /etc/rsyslog/tls/ca.crt
3) Configure forwarding rules with tags that help later
Create /etc/rsyslog.d/90-forward-to-collector.conf:
# /etc/rsyslog.d/90-forward-to-collector.conf
# Note: omfwd is built in and gtls is a netstream driver,
# so neither needs a module(load=...) line.
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog/tls/ca.crt"
)
# Local environment variable (helps split prod/stage). It only appears in
# forwarded output if you reference $.env in a custom template.
set $.env = "prod";
# Forward auth- and sudo-related logs (sudo and PAM typically log to authpriv)
if $syslogfacility-text == "auth" or $syslogfacility-text == "authpriv" then {
  action(
    type="omfwd"
    target="log-collector.internal"
    port="6516"
    protocol="tcp"
    StreamDriver="gtls"
    StreamDriverMode="1"
    StreamDriverAuthMode="anon"
    Template="RSYSLOG_SyslogProtocol23Format"
  )
  stop
}
# Forward your app logs if they use local0
if $syslogfacility-text == "local0" then {
  action(
    type="omfwd"
    target="log-collector.internal"
    port="6516"
    protocol="tcp"
    StreamDriver="gtls"
    StreamDriverMode="1"
    StreamDriverAuthMode="anon"
    Template="RSYSLOG_SyslogProtocol23Format"
  )
  stop
}
This forwards:
- the auth and authpriv facilities (SSH, sudo, PAM events)
- local0 for your application (you’ll configure your app to log to syslog local0)
Validate and restart:
sudo rsyslogd -N1
sudo systemctl restart rsyslog
4) Send a test message and verify it lands on the collector
On the client:
logger -p local0.info "audit-test: api-1 forwarding works"
On the collector:
sudo find /var/log/central -type f -name "*.log" -mtime -1 | head
sudo grep -R "audit-test" /var/log/central 2>/dev/null | head
Expected output includes a line with audit-test under a path like:
/var/log/central/api-1/local0/2026-04-17.log:audit-test: api-1 forwarding works
Make logs meaningful: per-service tagging and consistent facilities
Centralization gets you a single place to look. The payoff comes later, when you need to answer a question quickly and confidently.
Two conventions do most of the work:
- Use syslog facilities consistently (e.g., local0 for the API, local1 for workers, local2 for cron/batch).
- Include a service identifier in every message, even if your app already emits JSON.
If you run a Node.js API under systemd, you can route stdout/stderr into journald and forward via rsyslog, or log directly via a syslog transport. Either way, keep secrets out of log lines. That’s mostly process and config hygiene; HostMyCode’s guide to shipping apps without .env leaks using sops + age is a useful companion to any compliance logging effort.
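For the systemd route, one option is a unit drop-in (the unit name api.service is hypothetical) that tags the service’s journal output with a stable identifier and the local0 facility, which rsyslog then sees when reading from the journal:

```shell
# Hypothetical drop-in for a unit named api.service: stable tag + local0 facility.
sudo mkdir -p /etc/systemd/system/api.service.d
sudo tee /etc/systemd/system/api.service.d/logging.conf >/dev/null <<'EOF'
[Service]
SyslogIdentifier=api
SyslogFacility=local0
EOF
sudo systemctl daemon-reload
sudo systemctl restart api
```

SyslogIdentifier keeps the tag stable across deploys even if the binary name changes, which is exactly what you want for grep-able audit trails.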
Retention and rotation: keep what you need, delete what you must
Compliance logging isn’t “store forever.” It’s “store long enough, then delete on purpose.” A common baseline in 2026:
- auth/sudo: 180 days
- application logs: 30–90 days depending on volume and business needs
- security alerts: 365 days (if volume is low)
Because files are already split by day, “rotation” is mostly a retention job. A systemd timer keeps it auditable and easy to check, which beats a half-forgotten cron line during an incident.
Create /usr/local/sbin/central-log-retention.sh on the collector:
sudo tee /usr/local/sbin/central-log-retention.sh >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
BASE="/var/log/central"
# Delete application facilities older than 60 days
find "$BASE" -type f -path "*/local0/*" -name "*.log" -mtime +60 -print -delete
# Delete auth facility files older than 180 days
find "$BASE" -type f -path "*/auth/*" -name "*.log" -mtime +180 -print -delete
# Remove empty directories
find "$BASE" -type d -empty -print -delete
EOF
sudo chmod 0750 /usr/local/sbin/central-log-retention.sh
sudo chown root:adm /usr/local/sbin/central-log-retention.sh
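Before wiring up the timer, it’s worth dry-running the same find expressions against a scratch tree with -print only and no -delete, so a path typo can’t wipe real evidence (GNU touch’s `-d '90 days ago'` syntax assumed):

```shell
# Dry-run of the retention logic against a scratch tree (never the real logs):
# same find expressions as the script above, minus -delete.
set -euo pipefail
base=$(mktemp -d)
mkdir -p "$base/api-1/local0" "$base/api-1/auth"
touch -d '90 days ago'  "$base/api-1/local0/old.log"      # stale app log
touch                   "$base/api-1/local0/new.log"      # fresh app log (kept)
touch -d '200 days ago' "$base/api-1/auth/ancient.log"    # stale auth log
# Should print only old.log and ancient.log; new.log must not appear.
find "$base" -type f -path "*/local0/*" -name "*.log" -mtime +60 -print
find "$base" -type f -path "*/auth/*" -name "*.log" -mtime +180 -print
```

Once the dry run prints exactly the files you expect to lose, swap `-print` back to `-print -delete` in the real script.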
Create a systemd service and timer:
sudo tee /etc/systemd/system/central-log-retention.service >/dev/null <<'EOF'
[Unit]
Description=Central log retention cleanup
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/central-log-retention.sh
EOF
sudo tee /etc/systemd/system/central-log-retention.timer >/dev/null <<'EOF'
[Unit]
Description=Run central log retention daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now central-log-retention.timer
systemctl list-timers --all | grep central-log-retention
That last command should show the next run time.
Verification checks you can run during audits (and during incidents)
Put these checks in your ops notes. They’re quick, unambiguous, and they replace “pretty sure” with proof.
- Collector is listening:
sudo ss -lntp | grep 6516
Expected output includes LISTEN and rsyslogd.
- TLS handshake works from a client (run on the client):
openssl s_client -connect log-collector.internal:6516 -CAfile /etc/rsyslog/tls/ca.crt </dev/null
You should see Verify return code: 0 (ok) in the output.
- Log volume is sane (collector):
sudo du -sh /var/log/central/* 2>/dev/null | sort -h | tail
- Recent auth events are centralized (collector):
sudo grep -R "sudo" /var/log/central/*/auth/$(date +%Y-%m-%d).log 2>/dev/null | head
Common pitfalls (the stuff that breaks quietly)
- Clock drift: if one server is 3 minutes off, your incident timeline becomes guesswork. Fix NTP first.
- Hostname surprises: rsyslog uses the sender hostname. If all clients claim localhost, your central directory becomes useless. Set proper hostnames with hostnamectl.
- Firewall “half-open”: the collector listens, but a security group blocks traffic. Always confirm with ss on the collector and openssl s_client from a client.
- Log loops: forwarding from the collector back to itself (or to another forwarder that sends back) creates a storm. Use stop and keep forwarding rules specific.
- Too much forwarded: shipping everything (kern, daemon, mail, user) from day one can explode disk. Start with auth + app logs, expand intentionally.
If you already run heavy monitoring and logs, avoid building two competing pipelines by accident. A common split is: rsyslog for compliance-grade raw logs, and a separate observability stack for search and debugging. If that’s your direction, see HostMyCode’s log shipping with Loki guide for the developer-troubleshooting side.
Rollback plan (how to back out safely)
You shouldn’t need a dramatic rollback, but you should know exactly how to return to a known-good state.
- Client rollback: comment out or remove /etc/rsyslog.d/90-forward-to-collector.conf, then:
sudo rsyslogd -N1
sudo systemctl restart rsyslog
- Collector rollback: remove /etc/rsyslog.d/20-central-tls.conf and the firewall rule, then restart rsyslog.
- Data rollback: don’t delete central logs just because you roll back config. Keep them until retention expires, unless policy explicitly says otherwise.
Before any change, take a snapshot or at least back up /etc/rsyslog*. If you want a reliable pattern for VPS snapshots plus file-level backups, HostMyCode’s snapshot backup automation is a good baseline.
Next steps: tighten this into a compliance-friendly system
- Enable mutual TLS: issue client certificates and set StreamDriverAuthMode="x509/name" with permitted client names (StreamDriverPermittedPeers).
- Add immutability controls: append-only storage options, WORM-like buckets, or immutable snapshots, depending on your audit needs.
- Document log scope: which facilities, which servers, and why. Auditors like written intent as much as tooling.
- Central alerting: add a lightweight rule set for obvious signals (repeated sudo failures, ssh brute force, unexpected service restarts).
If you’re building a central log collector, choose a VPS with predictable I/O and enough disk for your retention window. Start with a HostMyCode VPS, and if you don’t want to maintain the base OS hardening yourself, consider managed VPS hosting for the collector node.
FAQ
Do I need rsyslog if I already use journald?
journald works well for local logging, but it isn’t a central transport by itself. rsyslog (or systemd-journal-remote) gives you a straightforward way to ingest logs centrally over an encrypted channel.
Should I forward all logs or only security-relevant logs?
Start with auth/sudo and your application facility. Add more only when you can justify it. Compliance is about meeting defined requirements, not collecting everything forever.
Is TLS enough to prevent log tampering?
TLS prevents interception and modification in transit. It doesn’t stop a privileged user on the collector from deleting files. For stronger guarantees, add access controls, immutable snapshots, and (for some regimes) offsite WORM storage.
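One lightweight tamper-evidence layer is a checksum manifest: hash closed log files, store the manifest somewhere the collector’s admins cannot rewrite, and re-verify on demand. A scratch-directory sketch (point BASE at /var/log/central for real use):

```shell
# Tamper-evidence sketch: hash log files into a manifest, then re-verify later.
set -euo pipefail
BASE=$(mktemp -d)                     # stand-in for /var/log/central
echo "demo line" > "$BASE/sample.log"
# Build the manifest over all closed log files
( cd "$BASE" && find . -type f -name "*.log" -exec sha256sum {} + > manifest.sha256 )
# Later (ideally from a copy of the manifest stored off-box): verify integrity
( cd "$BASE" && sha256sum -c manifest.sha256 )   # prints ./sample.log: OK
```

Any edited or truncated file shows up as FAILED on re-verification, which turns “pretty sure no one touched the logs” into a checkable claim.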
What’s the fastest way to prove it’s working?
Use logger on a client, then grep for the message on the collector. For transport validation, openssl s_client confirms the TLS chain and handshake quickly.
Summary
Good compliance logging is deliberately unglamorous: centralize, encrypt, tag, retain, verify. Once that baseline is in place, you can layer search and dashboards on top without betting your audit trail on a single UI. If you’re collecting from multiple servers, a dedicated HostMyCode VPS with clear disk sizing and backups makes the first “what happened?” request a lot easier to answer with evidence.