
Most VPS compromises don’t start with a dramatic “rooted” moment. They start with a small, easy-to-miss change: a new SSH key, a binary quietly swapped in /usr/local/bin, or a cron job slipped under a service account. Linux auditd log monitoring helps you answer the questions that matter during an incident: which user changed what, from where, and via which process—without trusting application logs to tell the truth.
This is an ops-first walkthrough for a single Linux VPS (or a small fleet). You’ll install auditd, add a tight rule set, forward events into the systemd journal, generate a daily “diff-style” email, and wire up a low-noise alert for the sensitive files that should almost never change.
Scenario and goals (what you’ll catch, and what you won’t)
Concrete setup: a small internal API behind Nginx on a Debian 13 VPS. You want to detect:
- SSH key and SSH config changes (/etc/ssh/sshd_config, ~/.ssh/authorized_keys)
- Privilege boundary changes (edits to /etc/sudoers and /etc/sudoers.d/)
- Account and group changes (/etc/passwd, /etc/shadow, /etc/group)
- Service definition changes (/etc/systemd/system/)
- Suspicious execution patterns (new binaries executed from writable locations)
What this won’t do: turn auditd into a malware scanner. Audit logs are best at evidence and accountability. If you need deeper coverage later, add file integrity monitoring or a scanner in your build/deploy pipeline.
Prerequisites
- A Linux VPS with root access (Debian 13 used in examples; Ubuntu 24.04 and Rocky/Alma variants are similar)
- systemd present (for journald integration)
- Basic comfort with SSH and editing files under /etc
- An outbound mail path for alerts (we’ll use msmtp to relay via SMTP)
If you’re still picking a host for this, start with something predictable. A HostMyCode VPS gives you clean root access, kernel auditing control, and enough headroom to keep audit queues from dropping events during spikes.
Why Linux auditd log monitoring still matters in 2026
In 2026, plenty of teams still run “just one VPS” workloads that never get a full SIEM pipeline. auditd closes that gap at the OS boundary. It also pays off during incident response, because it helps reconstruct who touched critical files—even if application logs were rotated, truncated, or tampered with.
Keep two rules of thumb in mind:
- Audit rules are policy. Too many rules create noise and overhead. Start narrow, then expand with intent.
- Dropped audit events are worse than no audit. A smaller, reliable ruleset beats a huge one that loses data under load.
Step 1: Install auditd and helpers
On Debian 13:
sudo apt update
sudo apt install -y auditd audispd-plugins jq msmtp msmtp-mta
Expected output (trimmed):
Setting up auditd ...
Created symlink /etc/systemd/system/multi-user.target.wants/auditd.service ...
Verify the daemon is running:
sudo systemctl status auditd --no-pager
● auditd.service - Security Audit Logging Service
Active: active (running)
Step 2: Enable useful auditd defaults (queue/backlog tuning)
Open /etc/audit/auditd.conf and set conservative, VPS-friendly values. The target here is simple: don’t drop events, and don’t keep logs forever.
sudo nano /etc/audit/auditd.conf
Recommended baseline:
# /etc/audit/auditd.conf
log_format = ENRICHED
flush = INCREMENTAL_ASYNC
freq = 50
max_log_file = 64
num_logs = 10
max_log_file_action = ROTATE
space_left_action = SYSLOG
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
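Before moving on, it’s worth confirming that the keys you care about actually landed in the file. A small sketch of that check; the fallback sample config exists only so the loop can run without root:

```shell
#!/usr/bin/env bash
# Sketch: verify the baseline keys exist in auditd.conf.
# Falls back to a bundled sample when the real file is unreadable
# (e.g. when run as a non-root user), so the check runs anywhere.
CONF=/etc/audit/auditd.conf
if [ ! -r "$CONF" ]; then
  CONF="$(mktemp)"
  cat > "$CONF" <<'EOF'
log_format = ENRICHED
flush = INCREMENTAL_ASYNC
max_log_file_action = ROTATE
space_left_action = SYSLOG
EOF
fi

missing=0
for key in log_format flush max_log_file_action space_left_action; do
  # each line looks like "key = value"; match the key at line start
  grep -qE "^${key}[[:space:]]*=" "$CONF" || { echo "MISSING: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "auditd.conf baseline keys present"
```

If any key prints as MISSING, re-open the file and re-check for typos before restarting auditd.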
Now set a larger kernel backlog to absorb bursts. Add this to /etc/default/grub:
sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet audit=1 audit_backlog_limit=8192"
Apply and reboot:
sudo update-grub
sudo reboot
After reboot, verify auditing is enabled:
sudo auditctl -s
enabled 1
backlog_limit 8192
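The field in auditctl -s output that matters most over time is lost. A sketch of a drop check you could fold into any monitoring script; the hard-coded sample output is an assumption used only when auditctl isn’t available or needs root:

```shell
#!/usr/bin/env bash
# Sketch: flag dropped audit events from `auditctl -s`.
# Falls back to illustrative sample output when auditctl is
# unavailable/unprivileged, so the parsing can be exercised anywhere.
status="$(auditctl -s 2>/dev/null || true)"
[ -n "$status" ] || status=$'enabled 1\nlost 0\nbacklog 0\nbacklog_limit 8192'

# `auditctl -s` prints one "lost N" line; extract N
lost="$(awk '$1 == "lost" {print $2}' <<<"$status")"
if [ "${lost:-0}" -gt 0 ]; then
  echo "WARNING: kernel dropped $lost audit events - trim rules or raise audit_backlog_limit"
else
  echo "audit queue healthy (lost=${lost:-unknown})"
fi
```

A nonzero lost counter means your ruleset is outrunning the queue; fix that before trusting any report built on top of the logs.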
If you’re tightening SSH, sudo, and baseline access controls at the same time, use a broader checklist alongside this: How to harden your Linux VPS for production in 2026.
Step 3: Create a tight audit rule set (low noise, high value)
Audit rules live under /etc/audit/rules.d/. Keep your customizations in a single file so you can review changes, roll back quickly, and explain the intent later.
sudo nano /etc/audit/rules.d/70-vps-integrity.rules
Paste this (the comments make future-you’s life easier):
# /etc/audit/rules.d/70-vps-integrity.rules
# 1) Identity files: user/group changes
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/group -p wa -k identity
-w /etc/gshadow -p wa -k identity
# 2) Sudo policy changes
-w /etc/sudoers -p wa -k sudo_policy
-w /etc/sudoers.d/ -p wa -k sudo_policy
# 3) SSH daemon config + host keys
-w /etc/ssh/sshd_config -p wa -k sshd
-w /etc/ssh/sshd_config.d/ -p wa -k sshd
-w /etc/ssh/ssh_host_ed25519_key -p wa -k sshd
# 4) Systemd unit overrides (common persistence spot)
-w /etc/systemd/system/ -p wa -k systemd_units
# 5) Cron persistence spots
-w /etc/crontab -p wa -k cron
-w /etc/cron.d/ -p wa -k cron
-w /etc/cron.daily/ -p wa -k cron
-w /etc/cron.hourly/ -p wa -k cron
# 6) Watch common writable execution paths for new/modified binaries
-w /usr/local/bin/ -p wa -k localbin
-w /opt/ -p wa -k opt_changes
# 7) Track execve by non-system users (UID>=1000), helps spot odd tooling.
#    Logged on syscall exit whether or not the exec succeeded.
-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k user_exec
# Cover the 32-bit ABI too, so a 32-bit binary can't slip past the rule
-a always,exit -F arch=b32 -S execve -F auid>=1000 -F auid!=4294967295 -k user_exec
# 8) Detect attempts to tamper with audit rules/logs
-w /etc/audit/ -p wa -k audit_config
-w /var/log/audit/ -p wa -k audit_logs
Load and compile rules:
sudo augenrules --load
Expected output: usually silent on success; on a re-run when nothing changed you may see No change.
Verify rules are active:
sudo auditctl -l | sed -n '1,25p'
Quick sanity check: you should see watches on /etc/passwd, /etc/sudoers, and a few -a always,exit rules.
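If you’d rather script that sanity check than eyeball it, here’s a sketch; the embedded sample ruleset is an assumption that exists only so the loop can run without root:

```shell
#!/usr/bin/env bash
# Sketch: assert every expected key appears in the loaded ruleset.
# Falls back to a sample ruleset when `auditctl -l` needs root,
# so the loop itself can run anywhere.
rules="$(auditctl -l 2>/dev/null || true)"
if [ -z "$rules" ] || [ "$rules" = "No rules" ]; then
  rules='-w /etc/passwd -p wa -k identity
-w /etc/sudoers -p wa -k sudo_policy
-w /etc/ssh/sshd_config -p wa -k sshd
-w /etc/systemd/system -p wa -k systemd_units
-w /etc/crontab -p wa -k cron'
fi

missing_keys=0
for key in identity sudo_policy sshd systemd_units cron; do
  grep -q -- "-k $key" <<<"$rules" || { echo "MISSING key: $key"; missing_keys=1; }
done
[ "$missing_keys" -eq 0 ] && echo "all expected audit keys loaded"
```

A MISSING line after augenrules --load usually means a typo in the rules file or a watched path that doesn’t exist on this distro.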
Step 4: Send audit events to journald (so you can query with journalctl)
auditd writes to /var/log/audit/audit.log by default. Keep that. The extra step here is forwarding key events into journald so you can use the same tools you already use for system debugging.
Edit the audisp syslog plugin config:
sudo nano /etc/audit/plugins.d/syslog.conf
Set:
active = yes
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string
Restart auditd:
sudo systemctl restart auditd
Now you can query audit messages via the journal (they appear as audisp-syslog or similar depending on distro packaging):
sudo journalctl -S -10m | grep -E 'audit\(|AUDIT_' | tail -n 5
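One thing you’ll hit immediately in raw records: fields that can contain spaces, like proctitle, are hex-encoded. ausearch -i decodes them for you; this sketch shows what that decoding does underneath (the record below is a made-up example, not from a real incident):

```shell
#!/usr/bin/env bash
# Sketch: decode the hex-encoded proctitle field of a raw audit record.
# Sample record is illustrative only.
rec='type=PROCTITLE msg=audit(1700000000.123:456): proctitle=636174002F6574632F736861646F77'
hex="${rec##*proctitle=}"

# Each pair of hex digits is one byte; NUL bytes separate argv entries.
# Turn "6361..." into "\x63\x61..." and let printf emit the bytes,
# then replace NULs with spaces for display.
decoded="$(printf '%b' "$(sed 's/../\\x&/g' <<<"$hex")" | tr '\0' ' ')"
echo "$decoded"   # prints: cat /etc/shadow
```

Knowing this makes it obvious why grepping raw logs for a command name can miss hits that ausearch -i finds.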
If you want a lightweight monitoring UI later, pair audit visibility with basic host metrics. This is a good fit: Beszel server monitoring tool.
Step 5: Configure outbound email alerts (msmtp)
You’ll send two kinds of email: a daily summary, and a fast “hot” alert for a small set of keys (sudo/sshd/identity). msmtp is a good match for a VPS because it relays mail without running a full MTA stack.
Create /etc/msmtprc:
sudo nano /etc/msmtprc
# /etc/msmtprc
defaults
auth on
tls on
tls_starttls on
logfile /var/log/msmtp.log
account opsrelay
host smtp.example.net
port 587
user vps-alerts@example.net
passwordeval "cat /root/.secrets/msmtp-pass"
from vps-alerts@example.net
account default : opsrelay
Create the password file and lock permissions:
sudo install -d -m 0700 /root/.secrets
sudo sh -c 'printf "%s" "REPLACE_WITH_SMTP_PASSWORD" > /root/.secrets/msmtp-pass'
sudo chmod 0600 /root/.secrets/msmtp-pass
sudo chmod 0600 /etc/msmtprc
Test mail delivery:
printf "Subject: auditd test\n\nMail path is working on $(hostname).\n" | sudo msmtp ops@example.com
(msmtp runs under sudo here because /etc/msmtprc is readable only by root after the chmod above.)
If it fails, go straight to /var/log/msmtp.log. Another common snag is outbound SMTP being blocked by policy; if that happens, relay through your transactional email provider’s SMTP endpoint.
Step 6: Build a daily audit report (human-readable, keyed)
Raw audit logs are noisy by design. The approach that works in practice: summarize by key (-k), and only report what happened since the last run.
Create a script at /usr/local/sbin/audit-daily-report.sh:
sudo nano /usr/local/sbin/audit-daily-report.sh
#!/usr/bin/env bash
set -euo pipefail
STATE_DIR="/var/lib/audit-report"
LAST_FILE="$STATE_DIR/last"
# ausearch expects 'MM/DD/YYYY HH:MM:SS' (or keywords like 'recent'),
# not ISO 8601, so store timestamps in that format
NOW="$(date '+%m/%d/%Y %H:%M:%S')"
HOST="$(hostname -f 2>/dev/null || hostname)"
TO="ops@example.com"
mkdir -p "$STATE_DIR"
if [[ -f "$LAST_FILE" ]]; then
SINCE="$(cat "$LAST_FILE")"
else
# First run: keep it short
SINCE="$(date -d '6 hours ago' '+%m/%d/%Y %H:%M:%S')"
fi
echo "$NOW" > "$LAST_FILE"
TMP="$(mktemp)"
{
echo "Host: $HOST"
echo "Window: $SINCE -> $NOW"
echo
echo "== High-signal changes (identity/sudo/sshd/systemd/cron) =="
for k in identity sudo_policy sshd systemd_units cron audit_config; do
echo
echo "-- key=$k --"
# $SINCE is deliberately unquoted: ausearch takes the date and the time
# as two separate arguments; aureport needs --raw input to parse the pipe
ausearch --raw -k "$k" -ts $SINCE 2>/dev/null | aureport -f -i 2>/dev/null || true
done
echo
echo "== Executions by non-system users (key=user_exec) =="
ausearch -k user_exec -ts $SINCE 2>/dev/null | tail -n 80 || true
echo
echo "Tip: use 'ausearch -k <key> -i' for readable decoded records."
} > "$TMP"
SUBJECT="[auditd] Daily report on $HOST"
{
echo "Subject: $SUBJECT"
echo
cat "$TMP"
} | msmtp "$TO"
rm -f "$TMP"
Make it executable:
sudo chmod 0750 /usr/local/sbin/audit-daily-report.sh
Run once manually:
sudo /usr/local/sbin/audit-daily-report.sh
Expected result: an email with one section per audit key. On a quiet server, the first run may be nearly empty.
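Step 6 creates the report but nothing schedules it yet. If you want systemd timers here too (the same pattern Step 7 uses for the hot alert), a minimal service/timer pair could look like this; the unit names and the 06:30 run time are my own choices:

```ini
# /etc/systemd/system/audit-daily-report.service
[Unit]
Description=Auditd daily report email

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/audit-daily-report.sh

# /etc/systemd/system/audit-daily-report.timer
[Unit]
Description=Send the auditd daily report each morning

[Timer]
OnCalendar=*-*-* 06:30:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it the same way as the hot-alert timer:
sudo systemctl daemon-reload
sudo systemctl enable --now audit-daily-report.timer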
Step 7: Add a “hot” alert for critical file edits (near real-time)
The daily report gives you context. The hot alert gives you speed. It runs every two minutes and only triggers on keys tied to sensitive access paths.
Create a small watcher that keeps state in a file and uses ausearch over a short time window. Create /usr/local/sbin/audit-hot-alert.sh:
sudo nano /usr/local/sbin/audit-hot-alert.sh
#!/usr/bin/env bash
set -euo pipefail
STATE_DIR="/var/lib/audit-report"
LAST_FILE="$STATE_DIR/hot-last"
# ausearch expects 'MM/DD/YYYY HH:MM:SS' (or keywords), not ISO 8601
NOW="$(date '+%m/%d/%Y %H:%M:%S')"
HOST="$(hostname -f 2>/dev/null || hostname)"
TO="ops@example.com"
mkdir -p "$STATE_DIR"
if [[ -f "$LAST_FILE" ]]; then
SINCE="$(cat "$LAST_FILE")"
else
SINCE="$(date -d '5 minutes ago' '+%m/%d/%Y %H:%M:%S')"
fi
echo "$NOW" > "$LAST_FILE"
# Only the stuff that should almost never change
KEYS=(sudo_policy sshd identity)
ALERT_BODY=""
for k in "${KEYS[@]}"; do
# $SINCE/$NOW deliberately unquoted: date and time are two arguments
OUT="$(ausearch -k "$k" -ts $SINCE -te $NOW -i 2>/dev/null || true)"
if [[ -n "$OUT" ]]; then
ALERT_BODY+=$'\n'
ALERT_BODY+="=== key=$k ($SINCE -> $NOW) ==="$'\n'
ALERT_BODY+="$OUT"$'\n'
fi
done
if [[ -n "$ALERT_BODY" ]]; then
SUBJECT="[auditd] HOT alert on $HOST: sensitive config changed"
{
echo "Subject: $SUBJECT"
echo
echo "Host: $HOST"
echo "Window: $SINCE -> $NOW"
echo
echo "$ALERT_BODY"
} | msmtp "$TO"
fi
Permissions:
sudo chmod 0750 /usr/local/sbin/audit-hot-alert.sh
Create a systemd timer instead of cron (cleaner to manage, and easier to review later). Add a service unit:
sudo nano /etc/systemd/system/audit-hot-alert.service
[Unit]
Description=Auditd hot alert check
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/audit-hot-alert.sh
Add the timer:
sudo nano /etc/systemd/system/audit-hot-alert.timer
[Unit]
Description=Run auditd hot alerts every 2 minutes
[Timer]
OnBootSec=2min
OnUnitActiveSec=2min
AccuracySec=30s
[Install]
WantedBy=timers.target
Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable --now audit-hot-alert.timer
sudo systemctl list-timers --all | grep audit-hot-alert
Expected output:
audit-hot-alert.timer ... next ...
Step 8: Verification (generate events on purpose)
Don’t treat this as “done” until you’ve forced a couple of known-good events through the pipeline. You’re verifying three things: auditd records the change, your rules catch it, and email actually leaves the box.
Test 1: sudoers change (don’t break sudo)
echo "# audit test $(date -Is)" | sudo tee -a /etc/sudoers.d/99-audit-test >/dev/null
sudo chmod 0440 /etc/sudoers.d/99-audit-test
Query audit by key:
sudo ausearch -k sudo_policy -ts recent -i | tail -n 20
You should see a record showing the file path under /etc/sudoers.d/ and the acting user.
Test 2: sshd config touch
sudo cp -a /etc/ssh/sshd_config /etc/ssh/sshd_config.bak.audit
sudo sed -i 's/^#\?ClientAliveInterval.*/ClientAliveInterval 300/' /etc/ssh/sshd_config
sudo sshd -t
Expected output from sshd -t: no output (exit code 0). If you get an error, revert immediately using the backup.
Now check:
sudo ausearch -k sshd -ts recent -i | tail -n 30
Within 2 minutes, the hot-alert timer should email you. If it doesn’t, run the alert script manually and check /var/log/msmtp.log.
Common pitfalls (and how to avoid them)
- Noise explosion from exec rules. The user_exec rule can get loud on developer machines. On a production VPS with a small set of SSH users, it’s usually fine. If it’s still too chatty, remove it or scope it to a single user using -F auid=1001.
- Audit backlog drops under load. Check auditctl -s and watch lost/backlog. If you see drops, trim rules first, then raise audit_backlog_limit, and only then consider adding CPU.
- Email alerts failing silently. msmtp only logs to /var/log/msmtp.log if it can write there. Fix permissions or change the logfile path.
- Watching directories that don’t exist. Some distros don’t ship /etc/ssh/sshd_config.d/. If it’s missing, create it or remove that watch so rule loading stays clean.
- Blocking your own incident response. If you set disk-full actions to SUSPEND, you must monitor disk usage. Suspending auditing on disk-full is the right safety behavior, but it should trigger an ops response.
For general VPS troubleshooting patterns (especially when resource pressure causes odd symptoms), keep this handy: Troubleshooting high memory usage on Linux VPS.
Rollback plan (cleanly revert if you need to)
If the ruleset creates performance issues or you need to narrow scope fast, roll back in this order:
- Disable the hot alert timer (stops emails and script runs): sudo systemctl disable --now audit-hot-alert.timer
- Remove the custom rules file: sudo rm -f /etc/audit/rules.d/70-vps-integrity.rules, then reload with sudo augenrules --load
- Disable journald forwarding by setting active = no in /etc/audit/plugins.d/syslog.conf, then: sudo systemctl restart auditd
- Revert the SSH config test change if you made one: sudo cp -a /etc/ssh/sshd_config.bak.audit /etc/ssh/sshd_config, then sudo sshd -t and sudo systemctl reload ssh
Fully disabling audit is rarely necessary, but if you must, remove audit=1 from GRUB and reboot. Treat that like disabling a smoke alarm: sometimes justified, always deliberate.
Next steps (where to take this after the first week)
- Centralize logs with a log pipeline (syslog-ng/Vector/Fluent Bit) so one compromised node can’t erase its own evidence.
- Add file integrity monitoring for /usr/bin and /etc if you maintain compliance controls.
- Harden your container supply chain if you deploy via Docker images; auditd tells you what ran, but it won’t validate image provenance. See: Container registry security hardening with Harbor and Trivy.
- Scale to multiple VPS nodes with a consistent baseline. If you want less day-2 ops, managed VPS hosting can take patching and core monitoring off your plate while you keep control of your application stack.
Summary
Linux auditd log monitoring works best with a narrow, defensible scope: identity files, sudo policy, SSH config, systemd units, and cron. Forward events into journald for quick inspection, then layer on a daily email and a low-noise hot alert for the files that should almost never change.
Roll this out the same way you’d roll out any control that can page you: start on one host, confirm you aren’t dropping audit events, then copy the baseline across the fleet. The hosting baseline matters too: a HostMyCode VPS sized to avoid audit backlog pressure keeps full kernel auditing control in your hands, or choose managed VPS hosting if you’d rather have patching and core monitoring handled while you focus on detection rules and your apps.
FAQ
Does auditd replace Fail2Ban or a WAF?
No. Fail2Ban and WAFs block traffic patterns. auditd records OS-level actions after a request becomes a process. Use them together: prevention at the edge, evidence at the host.
How much overhead should I expect from auditd on a typical VPS?
With a small watch-based ruleset (file/directory watches plus a single exec rule), overhead is usually low and predictable. The real risk is rule bloat; if you add syscall-heavy rules broadly, you can increase CPU usage and log volume quickly.
Where do audit logs live, and how long are they kept?
By default, auditd writes to /var/log/audit/audit.log and rotates per /etc/audit/auditd.conf. With the Step 2 baseline, local retention is capped at roughly max_log_file × num_logs = 64 MB × 10 = 640 MB of audit history. In this setup, key events are also forwarded to syslog/journald for easier querying.
How do I quickly see what changed in sudoers last hour?
ausearch doesn’t parse free-form phrases like “1 hour ago”; it accepts keywords (recent, today, boot) or an explicit date and time. Let date do the math:
sudo ausearch -k sudo_policy -ts $(date -d '1 hour ago' '+%m/%d/%Y %H:%M:%S') -i
(The command substitution is deliberately unquoted so the date and time arrive as two arguments.)
Look for the name= field showing the file path and the decoded user information.
What’s the safest way to test sshd config changes while monitoring?
Always run sshd -t before reloading SSH, and keep an existing root session open while you validate. Use a backup file (as shown above) so rollback is a single copy command.