
Most VPS monitoring failures aren’t about missing fancy dashboards. They come from basic visibility gaps.
You don’t notice memory pressure until the kernel starts killing PHP-FPM. You miss disk growth until MySQL refuses writes. And you never catch the one noisy cron job that turns load average into a daily incident.
This tutorial shows you how to set up Linux VPS monitoring with Netdata on a production server. Then you’ll lock it down, enable the collectors that matter on common web stacks, and add alerts that surface real problems early.
The commands below use Ubuntu 24.04 LTS (systemd). The same approach works on Debian 12, AlmaLinux 9/10, and Rocky Linux 9. Expect small differences in paths and package names.
If you host client sites, reseller workloads, or WordPress, this style of monitoring pays for itself. It often prevents the first avoidable outage.
What you’ll build (and what you should have ready)
- Netdata Agent running on your VPS, collecting system + service metrics
- Access restricted via localhost-only or reverse proxy + basic auth
- Alerts configured for: disk usage, inode exhaustion, swap/oom risk, load spikes, MySQL latency, and Nginx/Apache errors
- Verification commands so you can confirm collectors and alerts are actually working
Prereqs: root (or sudo) on a VPS, 512MB–1GB RAM minimum, outbound HTTPS allowed for package downloads.
If you’re still building your baseline server, do that first. Add monitoring after the base is stable.
A HostMyCode VPS works well here because you control the OS, firewall, and web stack end-to-end.
Related guides that pair well with this setup: Linux VPS setup checklist and the VPS resource monitoring setup overview.
Step 1 — Install Netdata Agent on Ubuntu/Debian or AlmaLinux/Rocky
As of 2026, Netdata’s kickstart script is the most practical install path. It installs the agent, pulls dependencies, and sets you up for sane upgrades.
You can control behavior with flags. For example, you can disable telemetry.
Ubuntu 24.04 / Debian 12
sudo apt update
sudo apt install -y curl ca-certificates
# Install Netdata
curl -sS https://get.netdata.cloud/kickstart.sh | sudo bash -s -- \
--stable-channel \
--disable-telemetry
AlmaLinux / Rocky (9/10)
sudo dnf install -y curl ca-certificates
curl -sS https://get.netdata.cloud/kickstart.sh | sudo bash -s -- \
--stable-channel \
--disable-telemetry
Verify the service:
sudo systemctl status netdata --no-pager
ss -lntp | grep 19999
Netdata listens on TCP 19999 by default. Treat this as an internal-only port until you lock access down.
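As a quick scripted check, a small helper like this (hypothetical, not part of Netdata) flags any 19999 listener that is not bound to loopback. It reads `ss -lntH`-style output on stdin, so the logic can be exercised without a live agent:

```shell
#!/bin/sh
# Sketch (assumed helper): fail if anything listens on 19999 outside loopback.
# Feed it `ss -lntH` output; column 4 is the local address:port.
check_netdata_binding() {
    bad=$(awk '$4 ~ /:19999$/ && $4 !~ /^127\.0\.0\.1:/ && $4 !~ /^\[::1\]:/')
    if [ -n "$bad" ]; then
        echo "EXPOSED: $bad"
        return 1
    fi
    echo "OK: 19999 is loopback-only"
}

# In production: ss -lntH | check_netdata_binding
```

A wildcard bind like `0.0.0.0:19999` or `[::]:19999` is flagged; only explicit loopback addresses pass.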
Step 2 — Lock down Netdata access (choose one safe pattern)
You have two sane defaults. Keep Netdata bound to localhost, or put it behind a reverse proxy with TLS and authentication.
If you want browser access without exposing a raw monitoring port, the reverse proxy option is usually the sweet spot.
Option A (simplest): bind Netdata to localhost only
Edit Netdata’s main config. On most installs the canonical path is:
sudo mkdir -p /etc/netdata
sudo nano /etc/netdata/netdata.conf
Add or update:
[web]
bind to = 127.0.0.1
Restart and verify:
sudo systemctl restart netdata
ss -lntp | grep 19999
Access it through SSH port forwarding:
ssh -L 19999:127.0.0.1:19999 root@YOUR_SERVER_IP
Then open http://localhost:19999 in your browser.
Option B (recommended): Nginx reverse proxy + basic auth + TLS
This keeps Netdata private and uses your existing web server. It terminates TLS with Let’s Encrypt and forces authentication.
It’s also easier to share with teammates than SSH tunnels.
1) Install Nginx + tools:
sudo apt update
sudo apt install -y nginx apache2-utils
2) Create a password file:
sudo htpasswd -c /etc/nginx/.htpasswd netdataadmin
3) Create an Nginx site (replace monitor.example.com):
sudo nano /etc/nginx/sites-available/netdata
server {
    listen 80;
    server_name monitor.example.com;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://127.0.0.1:19999;
    }
}
4) Enable the site and reload Nginx:
sudo ln -s /etc/nginx/sites-available/netdata /etc/nginx/sites-enabled/netdata
sudo nginx -t
sudo systemctl reload nginx
5) Issue TLS with Certbot:
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d monitor.example.com
6) Ensure Netdata listens only on localhost (as in Option A). That way only Nginx can reach it.
If you already run reverse proxies, HostMyCode’s guide is a good companion: Nginx SSL reverse proxy configuration guide. The OS version differs, but the Nginx layout is the same on Ubuntu 24.04.
Step 3 — Put a firewall rule in place (don’t rely on “security by obscurity”)
If you picked Option A (localhost-only), nothing should listen on the public interface, so a public rule for 19999 usually isn't strictly necessary.
Add the rule anyway: firewalls are your last line of defense against future misconfigurations.
With nftables, allow only SSH and web, and explicitly block 19999. For a clean baseline, use the dedicated guide: Linux VPS firewall setup with nftables.
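For the 19999 rule specifically, here is a minimal sketch. It assumes you already have an `inet filter` table with an `input` chain; the table and chain names follow a common nftables layout and may differ on your server:

```shell
# Assumption: an existing "inet filter" table with an "input" chain.
# Explicitly drop inbound connections to Netdata's port:
sudo nft add rule inet filter input tcp dport 19999 drop

# Confirm the rule landed:
sudo nft list chain inet filter input | grep 19999
```

If your default input policy is already drop, this rule is redundant but harmless, and it documents intent.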
Quick verification:
# From another machine on the internet:
curl -I http://YOUR_SERVER_IP:19999/
You want a timeout or connection refused. If it loads, stop and fix access control before you continue.
Step 4 — Enable common collectors (Nginx/Apache, MySQL/MariaDB, PHP-FPM)
Netdata auto-detects a lot. For hosting stacks, it’s still worth being explicit.
Enable the collectors you care about. Then confirm they are scraping successfully.
Filenames can vary slightly by Netdata version, but these locations are steady:
/etc/netdata/python.d/
/etc/netdata/go.d/
/etc/netdata/apps_groups.conf
Nginx: expose stub_status safely
Netdata reads Nginx metrics from the stub_status endpoint. Add a location that only localhost can reach.
Edit your Nginx site (often /etc/nginx/sites-available/default or your vhost):
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}
Reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
Verify locally:
curl -s http://127.0.0.1/nginx_status
Then enable the Netdata Nginx collector. If you have a go.d/nginx.conf file, that’s usually the preferred collector on current Netdata builds:
sudo nano /etc/netdata/go.d/nginx.conf
jobs:
  - name: local
    url: http://127.0.0.1/nginx_status
Restart Netdata:
sudo systemctl restart netdata
Confirm charts exist: open Netdata and search for “nginx” charts, or check logs:
sudo journalctl -u netdata -n 200 --no-pager | grep -i nginx
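You can also sanity-check the raw stub_status numbers yourself. This helper (hypothetical, not part of Netdata) extracts the headline counters; it reads the status text on stdin so it is testable offline, and in production you would pipe `curl -s http://127.0.0.1/nginx_status` into it:

```shell
#!/bin/sh
# Sketch (assumed helper): summarize Nginx stub_status output.
stub_status_summary() {
    awk '
        /^Active connections:/ { print "active=" $3 }
        /^Reading:/            { print "reading=" $2; print "writing=" $4; print "waiting=" $6 }
    '
}

# In production: curl -s http://127.0.0.1/nginx_status | stub_status_summary
```

If `waiting` keeps climbing while `writing` stays flat, clients are holding keepalive connections; that is normal, but a sudden jump in `active` during an incident is worth correlating with PHP-FPM charts.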
Apache: enable server-status with localhost-only access
If you run Apache (alone or behind Nginx), enable mod_status and keep it local.
sudo a2enmod status
sudo nano /etc/apache2/mods-enabled/status.conf
Set:
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
Reload Apache:
sudo systemctl reload apache2
curl -s http://127.0.0.1/server-status?auto | head
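The `?auto` format is machine-readable `Key: Value` lines, which makes it easy to script against. A small parser sketch (assumed helper, not part of Apache or Netdata) that pulls out the worker counts:

```shell
#!/bin/sh
# Sketch (assumed helper): extract worker counts from `server-status?auto`.
apache_workers() {
    awk -F': ' '
        $1 == "BusyWorkers" { busy = $2 }
        $1 == "IdleWorkers" { idle = $2 }
        END { printf "busy=%s idle=%s\n", busy, idle }
    '
}

# In production: curl -s 'http://127.0.0.1/server-status?auto' | apache_workers
```

Busy workers pinned at your MaxRequestWorkers limit with zero idle is the Apache equivalent of PHP-FPM pool saturation.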
MySQL/MariaDB: create a least-privilege monitoring user
Don’t point Netdata at MySQL as root. Create a small, read-only account meant for metrics.
Log into MySQL:
sudo mysql
Create the user (adjust password):
CREATE USER 'netdata'@'localhost' IDENTIFIED BY 'CHANGEME_strong_password';
GRANT USAGE, PROCESS, REPLICATION CLIENT, SHOW DATABASES, SHOW VIEW ON *.* TO 'netdata'@'localhost';
FLUSH PRIVILEGES;
Configure Netdata’s MySQL collector (commonly go.d/mysql.conf):
sudo nano /etc/netdata/go.d/mysql.conf
jobs:
  - name: local
    dsn: netdata:CHANGEME_strong_password@tcp(127.0.0.1:3306)/
Restart Netdata and verify logs:
sudo systemctl restart netdata
sudo journalctl -u netdata -n 200 --no-pager | grep -i mysql
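It is also worth auditing that the monitoring user really is least-privilege. A helper like this (hypothetical) scans `SHOW GRANTS` output for write or admin privileges a metrics account should never hold; it reads the grant lines on stdin so the check is testable offline:

```shell
#!/bin/sh
# Sketch (assumed helper): flag write/admin privileges on a monitoring user.
check_grants() {
    if grep -Eiq 'ALL PRIVILEGES|INSERT|UPDATE|DELETE|DROP|SUPER'; then
        echo "TOO BROAD: monitoring user has write/admin privileges"
        return 1
    fi
    echo "OK: read-only grants"
}

# In production:
# sudo mysql -e "SHOW GRANTS FOR 'netdata'@'localhost'" | check_grants
```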
If your sites are already slow, measure before you change things. Enable the slow query log and fix the real bottlenecks: MySQL slow query log tutorial.
PHP-FPM: make sure Netdata can see the process + pool behavior
Netdata tracks processes out of the box. For PHP-FPM pool details, enable an FPM status page and keep it local.
For PHP 8.3 on Ubuntu (common in 2026 hosting stacks), edit:
sudo nano /etc/php/8.3/fpm/pool.d/www.conf
Enable:
pm.status_path = /php-fpm-status
ping.path = /php-fpm-ping
Reload PHP-FPM:
sudo systemctl reload php8.3-fpm
Expose those endpoints via Nginx but restrict to localhost (example snippet inside your server block):
location ~ ^/(php-fpm-status|php-fpm-ping)$ {
    allow 127.0.0.1;
    deny all;
    # Use fastcgi_params directly: Ubuntu's snippets/fastcgi-php.conf
    # returns 404 for URIs that don't map to a real .php file, which
    # these virtual FPM endpoints never do.
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
}
Verify locally:
curl -s http://127.0.0.1/php-fpm-ping
curl -s http://127.0.0.1/php-fpm-status
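The status page is `key: value` text, so it is easy to reduce to the three numbers that matter during an incident. A parser sketch (assumed helper, not part of PHP-FPM or Netdata), reading the status text on stdin:

```shell
#!/bin/sh
# Sketch (assumed helper): summarize PHP-FPM pool saturation.
fpm_summary() {
    awk -F': *' '
        $1 == "active processes"     { a = $2 }
        $1 == "total processes"      { t = $2 }
        $1 == "max children reached" { m = $2 }
        END { printf "active=%s total=%s max_children_reached=%s\n", a, t, m }
    '
}

# In production: curl -s http://127.0.0.1/php-fpm-status | fpm_summary
```

A nonzero `max children reached` means the pool hit its `pm.max_children` ceiling at some point since the last FPM restart; that is usually the smoking gun behind 502/504 bursts.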
Step 5 — Tune retention and performance so Netdata stays lightweight
Netdata is efficient, but small VPS plans still have limits. Keep retention realistic, trim collectors you don’t use, and avoid turning your metrics database into your largest disk consumer.
Edit /etc/netdata/netdata.conf. Focus on DB mode and retention.
This is a safe baseline for many hosting VPS nodes:
[db]
    mode = dbengine
    # Retention option names vary by Netdata version. On current builds,
    # keep ~1-3 days of detailed data on smaller VPS:
    dbengine tier 0 retention time = 2d
    # On older builds, size on-disk retention instead:
    # dbengine multihost disk space MB = 256

[web]
    bind to = 127.0.0.1
Restart and watch resource use:
sudo systemctl restart netdata
ps -o pid,cmd,%cpu,%mem -C netdata
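To put a number on Netdata's footprint, you can sum the `%MEM` column across its processes. This helper (hypothetical) reads `ps`-style output on stdin so the arithmetic is testable offline:

```shell
#!/bin/sh
# Sketch (assumed helper): total %MEM across Netdata processes.
# Skips the ps header line; LC_ALL=C forces a "." decimal separator.
netdata_mem_pct() {
    LC_ALL=C awk 'NR > 1 { sum += $NF } END { printf "%.1f\n", sum + 0 }'
}

# In production: ps -o pid,cmd,%cpu,%mem -C netdata | netdata_mem_pct
```

On a small VPS, if the total creeps past a few percent of RAM, revisit retention and disable collectors you don't use.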
If you hit OOM during traffic spikes, fix memory pressure first. ZRAM plus sensible swappiness often prevents avoidable crashes: Linux VPS swap tuning.
Step 6 — Configure alerts that match hosting failure modes
Default alerts are fine for a lab. Hosting needs a few extra tripwires.
Start with disk space, inode usage, MySQL latency, and load that stays high long enough to indicate queueing.
Netdata’s health configuration lives under:
/etc/netdata/health.d/
Start by locating example health config (Netdata ships templates):
sudo ls /usr/lib/netdata/conf.d/health.d/ 2>/dev/null || true
sudo ls /usr/libexec/netdata/conf.d/health.d/ 2>/dev/null || true
If your distribution differs, find shipped health templates:
sudo find / -path '*netdata*health.d*' -maxdepth 4 2>/dev/null | head -n 50
Create a custom file so updates don’t overwrite your rules:
sudo nano /etc/netdata/health.d/hostmycode-basics.conf
Alert 1: disk almost full (hosting killer)
Disk alerts should fire before you’re out of runway. On busy WordPress servers with logs and backups, 85% is a useful early warning.
template: disk_space_usage_high
      on: disk.space
  lookup: average -1m percentage of used
   every: 1m
    warn: $this > 85
    crit: $this > 92
   class: Utilization
    info: root filesystem usage is high; clean logs, cache, backups, or expand storage
Alert 2: inode exhaustion (surprisingly common with caches)
“No space left on device” can show up even with plenty of free GB. Inodes often run out first on cache-heavy workloads.
template: inode_usage_high
      on: disk.inodes
  lookup: average -1m percentage of used
   every: 1m
    warn: $this > 80
    crit: $this > 90
   class: Utilization
    info: inode usage is high; check cache directories and mail spools
Alert 3: sustained load average (queue, not just CPU)
Load alerts matter when they are sustained and tied to CPU count. A short spike usually isn’t the real issue.
 alarm: load_sustained
    on: system.load
lookup: average -10m unaligned of load1
 every: 1m
# adjust 2 and 4 below to 1x and 2x this server's vCPU count (see: nproc)
  warn: $this > 2
  crit: $this > 4
 class: Utilization
  info: sustained load indicates queueing (CPU, disk IO, or runnable tasks)
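The threshold logic above is simple enough to mirror in a standalone check, which is handy for cron-based sanity checks on hosts without Netdata. A sketch (assumed helper) that classifies a load average against the core count, warning at 1x cores and going critical at 2x:

```shell
#!/bin/sh
# Sketch (assumed helper): classify a load average against CPU core count.
classify_load() {
    awk -v l="$1" -v c="$2" 'BEGIN {
        if (l > 2 * c)      print "CRITICAL"
        else if (l > 1 * c) print "WARNING"
        else                print "CLEAR"
    }'
}

# Example, using the 5-minute load average:
# classify_load "$(cut -d" " -f2 /proc/loadavg)" "$(nproc)"
```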
Restart Netdata to load health rules:
sudo systemctl restart netdata
Verify alerts are loaded:
sudo journalctl -u netdata -n 200 --no-pager | grep -i health
Step 7 — Add notifications (email is fine; just make it reliable)
Netdata supports multiple notification channels. Email is still the most universal option for hosting teams.
Email also works well when you route alerts to an on-call alias. The weak point is delivery.
If your VPS can’t send reliably, you’ll miss the one alert that matters.
If SMTP is already in place, connect Netdata to it. If not, fix deliverability first (SPF/DKIM/DMARC, rDNS, TLS): VPS email deliverability checklist.
Netdata’s notification config is typically:
/etc/netdata/health_alarm_notify.conf
Edit it:
sudo nano /etc/netdata/health_alarm_notify.conf
Enable email and set recipients (example):
SEND_EMAIL="YES"
DEFAULT_RECIPIENT_EMAIL="alerts@example.com"
Send a test notification (Netdata includes a test mode):
sudo -u netdata /usr/libexec/netdata/plugins.d/alarm-notify.sh test
If the path differs on your OS, locate it:
sudo find /usr -name 'alarm-notify.sh' 2>/dev/null
Step 8 — Use Netdata to troubleshoot real incidents (quick runbook)
Monitoring only matters if it cuts your time-to-answer. Use Netdata to narrow the problem fast.
Then confirm with a couple of targeted commands.
Problem: site returns 502/504, but CPU looks fine
- Check PHP-FPM charts: are you hitting pm.max_children limits?
- Check system RAM: rising page faults and swap activity point to memory pressure.
- Check disk IO: high iowait with low CPU usage often means storage contention.
Verification outside Netdata:
sudo systemctl status php8.3-fpm --no-pager
sudo tail -n 200 /var/log/nginx/error.log
sudo tail -n 200 /var/log/php8.3-fpm.log 2>/dev/null || true
Problem: “MySQL has gone away” or random WordPress admin slowness
- Look for MySQL query time and connection spikes.
- Correlate with OOM kills or swap storms in the same time window.
- If latency climbs during backups, move backups off-peak or throttle IO.
Outside Netdata:
sudo journalctl -k -n 200 --no-pager | grep -i -E 'oom|killed process'
sudo mysqladmin status
sudo mysqladmin processlist | head
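Scanning the kernel log for OOM victims by hand gets old fast. A parser sketch (assumed helper) that extracts the killed process names from `journalctl -k`-style lines, reading them on stdin:

```shell
#!/bin/sh
# Sketch (assumed helper): print the names of processes killed by the OOM killer.
oom_victims() {
    awk 'match($0, /Killed process [0-9]+ \(([^)]*)\)/) {
        s = substr($0, RSTART, RLENGTH)
        sub(/^Killed process [0-9]+ \(/, "", s)
        sub(/\)$/, "", s)
        print s
    }'
}

# In production: sudo journalctl -k --no-pager | oom_victims | sort | uniq -c
```

If `mysqld` shows up here in the same window as the "gone away" errors, you have your answer: fix memory pressure before tuning queries.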
Problem: disk fills up every few days
- Check Netdata’s disk usage trend: linear growth usually means logs or backups.
- Look for inode usage growth: caches and mail queues can eat inodes.
Find top offenders:
sudo du -xhd1 /var | sort -h
sudo find /var -xdev -type f -size +200M -print 2>/dev/null | head -n 50
sudo df -h /
sudo df -ih /
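The same threshold idea as the Netdata alert works as a one-shot script. This helper (hypothetical) lists mounts over a usage threshold from `df -P`-style output, reading it on stdin so the logic is testable offline:

```shell
#!/bin/sh
# Sketch (assumed helper): print mounts whose usage exceeds a threshold.
# Column 5 of `df -P` is the capacity percentage, column 6 the mount point.
disks_over() {
    awk -v lim="$1" 'NR > 1 {
        pct = $5
        sub(/%/, "", pct)
        if (pct + 0 > lim) print $6, $5
    }'
}

# In production: df -P | disks_over 85   (or df -iP for inode usage)
```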
If you’ve had a “disk full outage” once, treat log rotation as a baseline control, not an optional tidy-up: Linux VPS log rotation setup.
Step 9 — Common pitfalls (so you don’t create new problems)
- Exposing port 19999 publicly: bots will find it. Always bind to localhost or proxy it with auth + TLS.
- Monitoring user too privileged: for MySQL, use a least-privilege account. No root credentials in plain text.
- Status endpoints open to the internet: Nginx /nginx_status and Apache /server-status must be localhost-only.
- Alert fatigue: cut noise and keep alerts tied to outages: disk, memory pressure, sustained load, and service health.
- Skipping verification: always check the collector is producing charts and logs show successful scraping.
Where this fits in a hosted stack at HostMyCode
If you manage multiple customer sites, a common pattern is Netdata per server. Standardize your rules across nodes.
This keeps troubleshooting consistent, even when workloads change.
If you’d rather not own OS patching and baseline hardening, managed VPS hosting is the alternative. You keep control of the application stack, while routine platform maintenance stays predictable.
For higher-traffic workloads, give monitoring enough headroom. A small VPS can run Netdata, but your web stack still needs CPU and RAM for real traffic.
If you keep scaling just to hold latency steady, consider moving to a larger HostMyCode VPS plan rather than trimming monitoring until it goes blind.
Want monitoring that helps during incidents instead of adding noise? Start with a VPS you can harden properly. Spin up a HostMyCode VPS for full root access, or choose managed VPS hosting if you want OS-level maintenance handled while you retain control of the application stack.
FAQ
Is Netdata safe to expose on the public internet?
Not directly. Bind it to 127.0.0.1 and expose it only through a reverse proxy with TLS and authentication, or use SSH port forwarding.
Will Netdata slow down my VPS?
On most VPS plans it’s lightweight, but it’s still a daemon collecting metrics. Keep retention modest (for example 36–72 hours on small servers) and disable collectors you don’t use.
Can I monitor WordPress sites with Netdata?
Yes. Netdata won’t replace application-level APM, but it’s excellent for the hosting layer: PHP-FPM saturation, MySQL latency, disk growth, and sustained load that causes slow admin screens.
What’s the first alert I should set up?
Disk space and inode usage. Disk-related failures are among the most avoidable outages on hosting servers, and alerts usually give you hours or days of warning.
How do I validate that MySQL monitoring works?
Confirm the collector logs show successful connections, and check Netdata for MySQL charts. If charts are missing, re-check the DSN, credentials, and that MySQL listens on 127.0.0.1:3306.