
OpenTelemetry Collector Setup on a Linux VPS: Metrics + Logs for Nginx, MySQL, and System Health (2026 Tutorial)

OpenTelemetry Collector setup on a Linux VPS for metrics and logs—Nginx, MySQL, and system telemetry with real verification steps.

By Anurag Singh
Updated on Apr 26, 2026
Category: Tutorial

Most VPS monitoring failures aren’t caused by a lack of dashboards. They happen when you lose context. You see a spike in 502s with no matching Nginx errors, slow queries with no host CPU story, or a disk filling up quietly until it’s too late.

This tutorial walks you through OpenTelemetry Collector setup on a Linux VPS. You’ll ship consistent metrics and logs without locking yourself into one vendor’s agent.

You’ll install the collector on Ubuntu 24.04 LTS (the same flow works on Debian 12). You’ll ship:

  • Host metrics: CPU, RAM, disk, filesystem, network
  • Nginx access + error logs
  • MySQL slow query log (optional, but recommended)

Everything exports via OTLP to your monitoring backend (Grafana Cloud, Elastic, Datadog, New Relic, or an in-house OTLP endpoint). If this VPS runs production traffic, leave some headroom.

A 2 vCPU / 2–4 GB RAM instance is a sensible baseline. If you want steady performance and root access for telemetry, start with a HostMyCode VPS instead of shared hosting.

What you’ll build (and what you need)

This is a small, practical collector deployment for common hosting stacks (Nginx + PHP-FPM, WordPress, or a compact app).

At the end, you’ll manage one systemd service and one YAML config file.

  • OS: Ubuntu 24.04 LTS (recommended) or Debian 12
  • Web: Nginx 1.24+ (Ubuntu packages) or newer from vendor repo
  • Database (optional): MySQL 8.0 / 8.4 LTS or MariaDB 10.11+
  • Access: SSH as root or sudo user
  • Destination: An OTLP endpoint (HTTPS preferred) + credentials/token if required

If your VPS isn’t squared away yet (SSH keys, updates, sane defaults), fix that first.

The checklist in Linux VPS Setup Checklist in 2026 covers the items people usually regret skipping.

Step 1 — Create a dedicated service user and directories

Avoid running the collector as root unless you have no choice.

For log reading, grant group access and keep the process unprivileged.

sudo useradd --system --home /var/lib/otelcol --shell /usr/sbin/nologin otelcol
sudo install -d -o otelcol -g otelcol /etc/otelcol /var/lib/otelcol /var/log/otelcol

Config lives at /etc/otelcol/config.yaml.

If you enable any local debug output, keep it in /var/log/otelcol.
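
A quick sanity check that the service user and directories exist before moving on:

id otelcol
ls -ld /etc/otelcol /var/lib/otelcol /var/log/otelcol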

Step 2 — Install OpenTelemetry Collector Contrib (systemd)

Use the contrib build here. You need components like the filelog receiver and the hostmetrics receiver.

You also want the wider set of processors and exporters.

The contrib collector isn’t in the default Ubuntu or Debian repositories, so the most reliable route is installing the released .deb directly from the OpenTelemetry Collector releases page on GitHub.

The commands below keep it simple and systemd-friendly:

# Ubuntu 24.04 / Debian 12: install the contrib collector
# Prerequisites for downloading the release package

sudo apt-get update
sudo apt-get install -y wget ca-certificates gnupg

# Download the current .deb from the OpenTelemetry Collector releases on GitHub.
# Replace VERSION with the latest release (keep it current for 2026).
VERSION="0.120.0"
ARCH="amd64"

wget -O /tmp/otelcol-contrib.deb \
  "https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${VERSION}/otelcol-contrib_${VERSION}_linux_${ARCH}.deb"

sudo dpkg -i /tmp/otelcol-contrib.deb
sudo apt-get -f install -y

Verify the binary:

otelcol-contrib --version

If the package installed a default systemd unit, you’ll override the config path and runtime user next.

Step 3 — Configure the collector for host metrics + Nginx logs

Create /etc/otelcol/config.yaml with a configuration that’s easy to reason about on a single VPS.

This example exports via OTLP over HTTPS. Replace the endpoint and headers with your provider’s values.

sudo tee /etc/otelcol/config.yaml > /dev/null <<'YAML'
receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      filesystem:
        exclude_fs_types:
          fs_types: ["tmpfs", "devtmpfs", "overlay", "squashfs"]
      disk: {}
      network: {}
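      # Per-process metrics are optional; when the collector runs unprivileged,
      # this scraper can log permission errors for processes it can't inspect.
      # Drop it if the noise isn't worth the detail.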
      process: {}

  filelog/nginx_access:
    include: ["/var/log/nginx/access.log"]
    start_at: end
    operators:
      - type: regex_parser
        id: nginx_access
        # Nginx 'combined' log format
        regex: '^(?P<client_ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>[^\s]+) (?P<protocol>\S+)" (?P<status>\d{3}) (?P<bytes>\d+) "(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
      - type: add
        field: attributes.log_type
        value: nginx_access

  filelog/nginx_error:
    include: ["/var/log/nginx/error.log"]
    start_at: end
    operators:
      - type: add
        field: attributes.log_type
        value: nginx_error

processors:
  memory_limiter:
    check_interval: 2s
    limit_percentage: 75
    spike_limit_percentage: 15

  batch:
    send_batch_size: 8192
    timeout: 5s

  attributes/common:
    actions:
      - key: service.name
        value: vps
        action: upsert
      - key: host.role
        value: web
        action: upsert

exporters:
  otlphttp:
    endpoint: "https://OTLP_ENDPOINT.example.com"
    headers:
      Authorization: "Bearer YOUR_TOKEN_HERE"

  # Optional local debug output during initial setup. Recent contrib builds use
  # the debug exporter (the older logging exporter has been removed). Add it to
  # a pipeline's exporters list when you need it, and remove it once stable.
  debug:
    verbosity: normal

service:
  telemetry:
    logs:
      level: info
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [memory_limiter, attributes/common, batch]
      exporters: [otlphttp]
    logs:
      receivers: [filelog/nginx_access, filelog/nginx_error]
      processors: [memory_limiter, attributes/common, batch]
      exporters: [otlphttp]
YAML
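
Before wiring the service up, you can ask the collector to check the file itself (the validate subcommand is available in recent collector builds):

otelcol-contrib validate --config=/etc/otelcol/config.yaml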

Why this design? memory_limiter keeps the collector from fighting your web stack during bursts.

batch reduces request overhead. The small attribute set makes filtering and grouping easier later.

Step 4 — Grant Nginx log access without using root

On Ubuntu, Nginx logs are often root:adm with group-read permissions.

Add the collector user to that group.

# Check permissions
ls -l /var/log/nginx/access.log /var/log/nginx/error.log

# Common case on Ubuntu/Debian
sudo usermod -aG adm otelcol

If your logs use a different group (sometimes nginx), use that group instead.

Group membership won’t apply to the running service until you restart it.

Pitfall: Custom per-vhost logs under /var/www/... often end up with tighter permissions. In that case, the collector can’t read them. You’ll see missing data.

Keeping vhost logs under /var/log/nginx/ with consistent permissions is the least painful approach.
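
If you standardize on that, point each vhost at /var/log/nginx/ and widen the filelog include to a glob. A sketch (server names and filenames are illustrative):

# In the vhost's server block
access_log /var/log/nginx/example.com-access.log combined;
error_log  /var/log/nginx/example.com-error.log;

Then in /etc/otelcol/config.yaml:

    include: ["/var/log/nginx/*access.log"]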

Step 5 — Wire up systemd (override unit + run as otelcol)

The .deb usually installs a unit file.

Don’t edit packaged units directly. Use a drop-in override so upgrades don’t clobber your changes.

sudo systemctl cat otelcol-contrib || true

If the unit name differs (sometimes otelcol), list matching services:

systemctl list-unit-files | grep -E 'otelcol'

Create an override:

sudo systemctl edit otelcol-contrib

Paste:

[Service]
User=otelcol
Group=otelcol
ExecStart=
ExecStart=/usr/bin/otelcol-contrib --config=/etc/otelcol/config.yaml

# Hardening (safe defaults for a collector)
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/otelcol /var/log/otelcol

# Allow reading logs (still controlled by filesystem permissions)
ReadOnlyPaths=/var/log/nginx

Reload and start:

sudo systemctl daemon-reload
sudo systemctl enable --now otelcol-contrib
sudo systemctl status otelcol-contrib --no-pager

Follow the journal until you see clean exports:

sudo journalctl -u otelcol-contrib -f

Step 6 — Verify telemetry end-to-end (don’t skip this)

“Active (running)” only tells you systemd is happy.

You still need to confirm the collector can read files, scrape metrics, and deliver data to your backend.

Check that the collector can read Nginx logs

sudo -u otelcol head -n 1 /var/log/nginx/access.log
sudo -u otelcol head -n 1 /var/log/nginx/error.log

If you hit “Permission denied”, fix group membership/permissions and restart:

sudo systemctl restart otelcol-contrib

Confirm metrics are being scraped

In your backend, look for host CPU/memory metrics. Check that the host name labels match what you expect.

If nothing shows up, the collector journal usually tells you why. Common causes are TLS failures, auth errors, and wrong endpoints.
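
To rule out backend problems, you can temporarily add the local debug exporter from the exporters block to the metrics pipeline and watch the journal; remove it once data reaches the backend:

    metrics:
      receivers: [hostmetrics]
      processors: [memory_limiter, attributes/common, batch]
      exporters: [otlphttp, debug]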

Force log activity and confirm ingestion

Create one normal request and one error-producing request:

curl -I http://127.0.0.1/

# A guaranteed 404 (unless you have this route)
curl -I http://127.0.0.1/this-should-404

Confirm those new lines land in your backend with attributes.log_type=nginx_access or nginx_error.

If you want a tighter plan for what to alert on (and what to ignore), pair this with VPS Resource Monitoring Setup: What to Track, What to Ignore, and When to Scale in 2026.

Step 7 — Add MySQL slow query visibility (practical, not noisy)

Host metrics tell you the server is under stress. Nginx logs show what users experienced.

Slow query logs often explain the “why” behind a sudden slowdown at 10:42.

If slow query logging is already enabled, skip to “Ship the slow query log”.

Otherwise, enable it with conservative settings.

On MySQL 8.0/8.4, create a drop-in config:

sudo tee /etc/mysql/conf.d/slow-query.cnf > /dev/null <<'CNF'
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 0
CNF

sudo install -d -o mysql -g adm /var/log/mysql
sudo systemctl restart mysql

Confirm it’s active:

sudo mysql -e "SHOW VARIABLES LIKE 'slow_query_log%';"
sudo mysql -e "SHOW VARIABLES LIKE 'long_query_time';"

Pitfall: Setting long_query_time=0 creates a firehose. It also turns telemetry into a bill.

Start at 1 second. Only drop lower for short, targeted investigations.
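
To confirm slow queries actually land in the log before you ship it, force one deliberately (a harmless SLEEP, assuming sudo mysql works locally):

sudo mysql -e "SELECT SLEEP(2);"
sudo tail -n 5 /var/log/mysql/mysql-slow.log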

For deeper analysis and fixes, use: Linux VPS MySQL Slow Query Log Tutorial (2026).

Ship the slow query log with filelog

Add another receiver to /etc/otelcol/config.yaml.

Keep parsing light. You can structure it later in your backend if needed.

sudoedit /etc/otelcol/config.yaml

Insert this under the receivers: block, at the same indentation as the existing filelog receivers:

  filelog/mysql_slow:
    include: ["/var/log/mysql/mysql-slow.log"]
    start_at: end
    operators:
      - type: add
        field: attributes.log_type
        value: mysql_slow

Then add it to the logs pipeline receivers list:

    logs:
      receivers: [filelog/nginx_access, filelog/nginx_error, filelog/mysql_slow]

Grant read access the same way. Check ownership first: on Ubuntu the slow log usually ends up owned by mysql with group adm or mysql, so make sure whatever group otelcol belongs to can actually read the file.

Verify, then restart:

ls -l /var/log/mysql/mysql-slow.log
sudo usermod -aG adm otelcol
sudo systemctl restart otelcol-contrib

Step 8 — Reduce noise: basic log filtering and PII hygiene

Nginx access logs often include query strings. That’s where tokens, emails, and customer IDs tend to show up.

Shipping raw URLs can turn into a privacy and compliance mess fast.

A simple baseline is to strip query strings before export. You still keep the path, method, and status.

The access log receiver already extracts path into attributes. The collector’s transform processor (OTTL) can strip everything after ? before export.

Add this under processors: in /etc/otelcol/config.yaml:

  transform/strip_query:
    log_statements:
      - context: log
        statements:
          - replace_pattern(attributes["path"], "\\?.*$", "") where attributes["path"] != nil

Then add it to the logs pipeline, before batch:

      processors: [memory_limiter, attributes/common, transform/strip_query, batch]

If your parser extracts the URL into a different attribute than path, adjust the statement.

Redeploy, then verify that URLs in the backend no longer contain query strings.

Step 9 — Performance limits and sizing on a busy VPS

On a typical WordPress VPS, the collector usually stays under 100–200 MB RAM and uses a small slice of a vCPU.

The real costs come from log volume and exporter retry loops when the network or endpoint misbehaves.

  • Increase batch size a bit if your backend charges per request or you see high exporter overhead.
  • Keep collection_interval at 30s for host metrics. 10s can help during debugging, but it adds load and noise.
  • Cap memory with memory_limiter. It’s there to protect the rest of the box.
  • Watch disk if you leave the local debug exporter enabled for long.
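
To see what the collector actually uses on your box, systemd tracks it per service (the Memory and CPU lines in status output on recent systemd versions):

systemctl status otelcol-contrib --no-pager | grep -E 'Memory|CPU'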

If the VPS is already unstable (OOM events, swap thrashing), address that first.

This pairs well with Linux VPS swap tuning: set up zram and swappiness to prevent OOM (2026 tutorial).

Step 10 — Troubleshooting: the failures you’ll actually see

Collector problems are usually predictable.

Debug them in order: receiver (input), processors (middle), exporter (output).

Don’t guess—use the journal.

Exporter errors: TLS, auth, and bad endpoints

  • Symptom: Collector runs, but nothing appears in backend.
  • Check: journalctl -u otelcol-contrib for HTTP 401/403/404 or TLS errors.
  • Fix: Confirm your OTLP URL format. Some backends require /v1/metrics and /v1/logs explicitly; others accept a base endpoint.

Permission denied on log files

  • Symptom: Errors opening files under /var/log/nginx or /var/log/mysql.
  • Check: sudo -u otelcol tail -n 1 /var/log/nginx/access.log
  • Fix: Add the user to the correct group (often adm) and restart the service. If logrotate resets permissions, adjust logrotate so group read survives rotations.

Log rotation breaks ingestion

  • Symptom: New log lines stop appearing in the backend after logrotate runs, even though the service is still active.
  • Check: Force a rotation (sudo logrotate -f /etc/logrotate.d/nginx), send a request, and confirm it reaches the backend.
  • Fix: Use rename + reopen rotation (reload Nginx or send USR1 in postrotate) rather than copytruncate, and make sure rotated files keep group-read permissions.

Collector consumes too much CPU on high-traffic access logs

  • Symptom: CPU spikes attributed to otelcol during traffic.
  • Fix: Simplify parsing. Shipping raw access logs can be cheaper than heavy regex. If you need structured fields, consider switching Nginx to JSON logs and parsing JSON instead.
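
If you go the JSON route, the Nginx side looks roughly like this (a sketch: the format name and field list are illustrative, and escape=json requires Nginx 1.11.8+):

# In the http context, e.g. /etc/nginx/conf.d/json-logs.conf
log_format json_combined escape=json
  '{"client_ip":"$remote_addr","method":"$request_method","path":"$uri",'
  '"status":"$status","bytes":"$body_bytes_sent","user_agent":"$http_user_agent"}';
access_log /var/log/nginx/access.log json_combined;

On the collector side, swap the regex_parser operator in filelog/nginx_access for a json_parser operator.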

Step 11 — Security posture: firewall and limiting exposure

With this design, the collector only makes outbound HTTPS connections.

You don’t need any inbound firewall rules for it.

Keep the rest of your baseline firewall tight. If you haven’t locked down SSH and web ports yet, use: Linux VPS Firewall Setup with nftables (2026 Tutorial): Secure SSH, Web, and Control Panels Safely.

Common mistake: Opening an OTLP receiver port to the public internet “just to test.” Don’t do it.

Push telemetry outbound from the VPS, or terminate ingestion behind a private network/VPN.

Where this fits in a hosting workflow (WordPress, resellers, and control panels)

For WordPress, this mix is the sweet spot: Nginx requests, PHP-FPM errors (easy to add later), and MySQL slow queries.

You get enough signal to pinpoint plugin problems, failing upstreams, and brute-force traffic without guessing.

If you manage multiple client sites, one standardized collector config saves real time.

Tie it into routine maintenance. The cadence in VPS maintenance checklist: keep your Linux server fast, secure, and predictable in 2026 fits well with alert-driven ops.

Summary: a clean baseline you can expand safely

You now have the collector running as a hardened systemd service. It collects host metrics, reads Nginx logs (and optionally MySQL slow queries), and exports over OTLP.

It’s a solid baseline for alerting and incident triage, without turning your VPS into a side project.

If you’re doing this on customer-facing production sites, you’ll get better results on a VPS sized with headroom and steady I/O.

For that, use managed VPS hosting if you want OS-level help, or deploy it yourself on a HostMyCode VPS if you prefer full control.

Reliable monitoring starts with a VPS that behaves predictably. HostMyCode VPS gives you root access and consistent resources for collectors, logs, and tuning. If you want the OS and platform upkeep handled while you focus on your sites, managed VPS hosting is the straightforward option.

FAQ

Should I install the collector on the same VPS as my website?

For a single VPS running one or a few sites, yes. Keep memory limits enabled and don’t overdo log parsing.

If you’re running many busy sites, a separate telemetry/logging box is often the cleaner design.

Do I need to open any ports on the firewall for OpenTelemetry?

Not for this tutorial. The collector only sends outbound OTLP over HTTPS to your backend.

Don’t expose an OTLP receiver publicly unless you have a specific, secured plan.

Why use OpenTelemetry instead of a vendor agent?

Because OTLP is portable. You can change backends without reinstalling agents everywhere, and you decide exactly what data leaves the server.

How do I add PHP-FPM logs to this setup?

Add another filelog receiver for /var/log/php*-fpm.log (the exact path depends on distro and PHP version), then include it in the logs pipeline.

It’s the same pattern as the Nginx receivers.
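
A minimal receiver sketch (the path glob and attribute value are illustrative; confirm your distro’s actual PHP-FPM log location):

  filelog/php_fpm:
    include: ["/var/log/php*-fpm.log"]
    start_at: end
    operators:
      - type: add
        field: attributes.log_type
        value: php_fpm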

What’s the quickest way to confirm the collector isn’t missing logs after rotation?

Force a rotation (sudo logrotate -f /etc/logrotate.d/nginx), then generate a request and verify it shows up in your backend.

If it doesn’t, switch rotation to rename + reopen and reload Nginx in postrotate.
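
A rename + reopen logrotate sketch (close to Ubuntu’s packaged Nginx config; adjust retention and ownership to your setup):

/var/log/nginx/*.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}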