
Logs rarely break with a bang. They fail in the background: a rotated file that never got shipped, a parser change that quietly drops fields, a disk that fills up because you kept “just in case” logs on the box. VPS log shipping with Loki gives you a low-cost, queryable log trail that survives reboots, redeploys, and human forgetfulness—without standing up a heavyweight search cluster.
This is a practical walkthrough for developers and sysadmins running a few Linux VPS instances who want centralized logs with predictable costs. We’ll use an Ubuntu 24.04 VPS as the app node, ship journald plus Nginx access/error logs, and ingest them into a small Loki instance. It fits a blog, an API, or an internal tool—any place you need “what happened?” answers quickly.
Scenario and architecture (what you’re building)
You’re building a simple two-node setup:
- Log server (Loki): receives logs on TCP 3100 and stores them on disk.
- App VPS (Promtail): reads journald plus /var/log/nginx/*.log and ships to Loki over HTTPS (or HTTP inside a private network).
If you’re experimenting, you can run Loki and Promtail on one VPS. Once logs matter during incidents, split the roles so a noisy app node can’t take your log store down with it.
Hosting note: give Loki room to breathe. For Loki + Grafana + retention, start with NVMe storage and at least 2 GB RAM. A HostMyCode VPS works well here because you control firewalling, disk layout, and retention policies directly.
Prerequisites (keep it boring, keep it stable)
- Two Linux VPS instances (or one for a lab). This post uses Ubuntu Server 24.04 LTS.
- Root or sudo access.
- A DNS name for the log server (recommended), e.g. logs.example.net. If you need DNS, use HostMyCode Domains.
- Ports: 3100/tcp reachable from the app VPS (or private network/VPN), and 22/tcp for SSH admin.
Before you touch Loki, take a quick look at your local log situation. If logs are already chewing through disk, fix retention first. Pair this with our log rotation guide so you don’t ship noise and accidentally keep terabytes locally.
Step 1: Provision the log server VPS and open the right firewall holes
On the Loki server, install the basics and set aside persistent storage.
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg ufw
Create directories for config and data:
sudo mkdir -p /etc/loki /var/lib/loki /var/log/loki
sudo chown -R root:root /etc/loki
sudo chown -R loki:loki /var/lib/loki 2>/dev/null || true
We haven’t created the loki user yet; the chown will no-op for now, which is fine.
Firewall: allow SSH, and allow Loki’s API only from your app VPS IP (replace 203.0.113.10).
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow from 203.0.113.10 to any port 3100 proto tcp
sudo ufw enable
sudo ufw status
You should see a rule for 22/tcp and a restricted rule for 3100/tcp.
If you standardize on nftables, stick with it. The rule is the same either way: don’t expose 3100 to the whole internet unless you also put authentication in front of it.
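For reference, a minimal nftables sketch of the same policy (hypothetical table and chain names; merge this into your existing ruleset rather than replacing it, and adjust the saddr to your app VPS):

```
# /etc/nftables.conf fragment -- same intent as the UFW rules above
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 22 accept
    ip saddr 203.0.113.10 tcp dport 3100 accept
  }
}
```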
Step 2: Install Loki (systemd service, no containers)
On a VPS, a native systemd service tends to be the least surprising option: straightforward logs, clean restarts, and easy rollback. We’ll install Loki from Grafana’s official release archive.
Create a dedicated user:
sudo useradd --system --home /var/lib/loki --shell /usr/sbin/nologin loki
sudo chown -R loki:loki /var/lib/loki
Download Loki (amd64 example). Check the latest stable version on Grafana’s release page before running, and set LOKI_VER to match; the commands below pin one version purely as an example.
cd /tmp
LOKI_VER="3.2.1"
curl -fL -o loki.zip "https://github.com/grafana/loki/releases/download/v${LOKI_VER}/loki-linux-amd64.zip"
sudo apt-get install -y unzip
unzip -o loki.zip
sudo install -m 0755 loki-linux-amd64 /usr/local/bin/loki
Create a minimal Loki config at /etc/loki/loki.yaml using filesystem storage and a sane retention window for a small setup.
sudo tee /etc/loki/loki.yaml >/dev/null <<'YAML'
auth_enabled: false

server:
  http_listen_address: 0.0.0.0
  http_listen_port: 3100

common:
  path_prefix: /var/lib/loki
  storage:
    filesystem:
      chunks_directory: /var/lib/loki/chunks
      rules_directory: /var/lib/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2026-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: loki_index_
        period: 24h

limits_config:
  retention_period: 168h # 7 days
  ingestion_rate_mb: 8
  ingestion_burst_size_mb: 16
  max_global_streams_per_user: 5000

compactor:
  working_directory: /var/lib/loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  delete_request_store: filesystem

ruler:
  storage:
    type: local
    local:
      directory: /var/lib/loki/rules

analytics:
  reporting_enabled: false
YAML
Now add a systemd unit.
sudo tee /etc/systemd/system/loki.service >/dev/null <<'UNIT'
[Unit]
Description=Loki log aggregation
After=network-online.target
Wants=network-online.target
[Service]
User=loki
Group=loki
ExecStart=/usr/local/bin/loki -config.file=/etc/loki/loki.yaml
Restart=on-failure
RestartSec=2
NoNewPrivileges=true
PrivateTmp=true
ProtectHome=true
ProtectSystem=strict
ReadWritePaths=/var/lib/loki
[Install]
WantedBy=multi-user.target
UNIT
sudo systemctl daemon-reload
sudo systemctl enable --now loki
sudo systemctl status loki --no-pager
Verify Loki responds locally:
curl -s http://127.0.0.1:3100/ready ; echo
Expected output: ready
Step 3: (Recommended) Put TLS in front of Loki with Nginx
Loki works well behind a reverse proxy. Nginx gives you TLS, a clean place for optional basic auth, and somewhere to rate-limit noisy clients. If you already run a reverse proxy stack, reuse it.
Install Nginx and Certbot:
sudo apt-get install -y nginx certbot python3-certbot-nginx
Create an Nginx site at /etc/nginx/sites-available/loki:
sudo tee /etc/nginx/sites-available/loki >/dev/null <<'NGINX'
server {
    listen 80;
    server_name logs.example.net;

    location / {
        proxy_pass http://127.0.0.1:3100;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
NGINX
sudo ln -s /etc/nginx/sites-available/loki /etc/nginx/sites-enabled/loki
sudo nginx -t
sudo systemctl reload nginx
Issue a TLS cert (make sure DNS points to this VPS):
sudo certbot --nginx -d logs.example.net
After this, allow inbound 80/443 (if you’re using UFW):
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
Then tighten 3100 again. If Nginx is your only public entry point, you can keep Loki bound to localhost. Set Loki’s http_listen_address to 127.0.0.1 and reload the service.
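For example, the server block in /etc/loki/loki.yaml becomes:

```yaml
server:
  # Only the local Nginx proxy can reach Loki now; Promtail connects
  # via https://logs.example.net instead of the raw port.
  http_listen_address: 127.0.0.1
  http_listen_port: 3100
```

After editing, run sudo systemctl restart loki; the UFW rule for 3100 is then redundant and can be removed.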
Step 4: Install Promtail on the app VPS and read journald + Nginx
Promtail is Loki’s shipping agent. It’s small, stable, and a good fit for VPS nodes.
On the app VPS:
sudo apt-get update
sudo apt-get install -y ca-certificates curl unzip
Create a user and directories:
sudo useradd --system --home /var/lib/promtail --shell /usr/sbin/nologin promtail
sudo mkdir -p /etc/promtail /var/lib/promtail
sudo chown -R promtail:promtail /var/lib/promtail
Download Promtail:
cd /tmp
PROMTAIL_VER="3.2.1"
curl -fL -o promtail.zip "https://github.com/grafana/loki/releases/download/v${PROMTAIL_VER}/promtail-linux-amd64.zip"
unzip -o promtail.zip
sudo install -m 0755 promtail-linux-amd64 /usr/local/bin/promtail
Now create /etc/promtail/promtail.yaml. This ships two sources:
- journald for system + service logs
- Nginx access and error logs from files
sudo tee /etc/promtail/promtail.yaml >/dev/null <<'YAML'
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/lib/promtail/positions.yaml

clients:
  - url: https://logs.example.net/loki/api/v1/push

scrape_configs:
  - job_name: journald
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
        host: api-vps-01
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
      - source_labels: ['__journal__hostname']
        target_label: 'hostname'
      - source_labels: ['__journal_priority_keyword']
        target_label: 'priority'

  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          host: api-vps-01
          __path__: /var/log/nginx/*.log
    pipeline_stages:
      # Promtail derives a "filename" label from __path__; match on it so
      # each regex stage only runs against the right file.
      - match:
          selector: '{job="nginx", filename="/var/log/nginx/access.log"}'
          stages:
            - regex:
                expression: '^(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) "(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
            - labels:
                status:
                remote_addr:
      - match:
          selector: '{job="nginx", filename="/var/log/nginx/error.log"}'
          stages:
            - regex:
                expression: '^(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<pid>\d+)#(?P<tid>\d+): (?P<message>.*)$'
            - labels:
                level:
YAML
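Before restarting anything, it’s worth sanity-checking the access-log regex against a realistic line. A quick local sketch (assumes GNU grep with -P support, as shipped on Ubuntu; the pattern mirrors the Promtail stage minus the named groups):

```shell
# A sample Nginx combined-format line (hypothetical values).
line='203.0.113.7 - - [01/Jan/2026:12:00:00 +0000] "GET /health HTTP/1.1" 200 512 "-" "curl/8.5.0"'

# Same shape as the Promtail regex stage, without named capture groups.
pattern='^\S+ - \S+ \[[^]]+\] "[^"]+" \d{3} \d+ "[^"]*" "[^"]*"$'

if printf '%s\n' "$line" | grep -Pq "$pattern"; then
  echo "regex matches"
else
  echo "regex does NOT match"
fi
```

If it prints "regex does NOT match" against your real log lines, your Nginx log_format differs from the default combined format and the Promtail regex needs adjusting.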
Important: the journald reader needs access to the system journal. The simplest way is to add promtail to the systemd-journal group:
sudo usermod -aG systemd-journal promtail
Then create a systemd service:
sudo tee /etc/systemd/system/promtail.service >/dev/null <<'UNIT'
[Unit]
Description=Promtail log shipping agent
After=network-online.target
Wants=network-online.target
[Service]
User=promtail
Group=promtail
ExecStart=/usr/local/bin/promtail -config.file=/etc/promtail/promtail.yaml
Restart=on-failure
RestartSec=2
NoNewPrivileges=true
PrivateTmp=true
ProtectHome=true
ProtectSystem=strict
ReadWritePaths=/var/lib/promtail
SupplementaryGroups=systemd-journal
[Install]
WantedBy=multi-user.target
UNIT
sudo systemctl daemon-reload
sudo systemctl enable --now promtail
sudo systemctl status promtail --no-pager
Verification: confirm Promtail can reach Loki and is pushing entries.
sudo journalctl -u promtail -n 50 --no-pager
You’re looking for lines like “msg="started tailing file"” and “msg="batch sent"” with status=204 (Loki returns 204 on successful pushes).
Step 5: Query logs from the Loki server (quick sanity checks)
You can talk to Loki directly over HTTP. From your laptop or the Loki server:
curl -sG "https://logs.example.net/loki/api/v1/labels" | head
Now query the last 5 minutes of Nginx logs:
NOW_NS=$(date +%s%N)
FIVE_MIN_NS=$((NOW_NS - 300000000000))
curl -sG "https://logs.example.net/loki/api/v1/query_range" \
--data-urlencode 'query={job="nginx",host="api-vps-01"}' \
--data-urlencode "start=${FIVE_MIN_NS}" \
--data-urlencode "end=${NOW_NS}" \
--data-urlencode 'limit=10' | jq '.data.result[0].stream'
Expected output: a JSON object with labels like job, host, and possibly status for access logs.
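If you query often, wrap the timestamp math in a tiny helper. A minimal sketch in plain bash arithmetic (nothing Loki-specific; feed START/END to query_range as above):

```shell
# Print start/end nanosecond timestamps for an N-minute lookback window.
range_ns() {
  local minutes="$1"
  local end_ns start_ns
  end_ns=$(date +%s%N)
  start_ns=$((end_ns - minutes * 60 * 1000000000))
  echo "$start_ns $end_ns"
}

# Example: a 5-minute window.
read -r START END <<<"$(range_ns 5)"
echo "window length: $(( (END - START) / 1000000000 ))s"   # prints: window length: 300s
```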
If you want a UI, add Grafana later. Right now, the API checks tell you whether ingestion works before you invest time in dashboards.
Step 6: Make logs useful (labels, cardinality, and “don’t regret this later”)
Loki stays fast by indexing labels and leaving the log body unindexed. That also means labels need discipline.
- Good labels: host, job, unit, status, level, env (prod/stage).
- Bad labels: request IDs, user IDs, full URLs with query strings, and client IPs if your traffic is huge.
In the sample config, we label remote_addr for Nginx. That’s handy on low-traffic systems, but it can explode stream count on a busy API (especially with large NAT pools). If Loki complains about stream limits, drop that label and keep the IP only in the log line.
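You can estimate the blast radius before you ship: each distinct label combination becomes a separate Loki stream. A rough sketch that counts combinations in an access log (inline sample data here; point it at /var/log/nginx/access.log for real numbers):

```shell
# Build a tiny sample access log (hypothetical lines, combined format).
cat > /tmp/sample_access.log <<'EOF'
203.0.113.7 - - [01/Jan/2026:12:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.5.0"
203.0.113.8 - - [01/Jan/2026:12:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "curl/8.5.0"
203.0.113.7 - - [01/Jan/2026:12:00:02 +0000] "GET /x HTTP/1.1" 404 128 "-" "curl/8.5.0"
EOF

# In the combined format: $1 = client IP, $9 = status.
with_ip=$(awk '{ print $9, $1 }' /tmp/sample_access.log | sort -u | wc -l)
status_only=$(awk '{ print $9 }' /tmp/sample_access.log | sort -u | wc -l)
echo "streams with status+remote_addr: ${with_ip}"
echo "streams with status only: ${status_only}"
```

On the sample this prints 3 streams with both labels versus 2 with status alone; on a real API with thousands of client IPs, the gap is what blows past stream limits.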
If you’re building a wider observability setup, pair this with our OpenTelemetry Collector guide. OTel can standardize attributes across logs, metrics, and traces while Loki remains your log store.
Step 7: Add retention and storage guardrails (so Loki doesn’t eat the server)
On small VPS nodes, disk is usually the constraint. Treat this as mandatory:
- Retention in Loki (we set 7 days).
- Local disk monitoring and alerting.
For monitoring, use whatever you already trust. If you don’t have a stack yet, follow Linux VPS monitoring with Prometheus and Grafana and add node exporter disk alerts. Loki issues often show up as “disk 90% full” long before anything else looks wrong.
On Loki, check data growth:
sudo du -sh /var/lib/loki
sudo ls -lah /var/lib/loki
If the numbers climb too fast, cut retention (try 72h) and stop shipping the noisiest units first.
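To see growth over time rather than a single snapshot, log a dated size sample from cron. A simple sketch (the /tmp paths are just for demonstration; on the real server use /var/lib/loki and a persistent log path):

```shell
# Append one "YYYY-MM-DD <kilobytes>" line per run; diff consecutive
# days to get your actual post-compression ingest rate.
TARGET_DIR=/tmp                  # use /var/lib/loki on the Loki server
GROWTH_LOG=/tmp/loki-growth.log  # use a persistent path in production
size_kb=$(du -sk "$TARGET_DIR" 2>/dev/null | cut -f1)
printf '%s %s\n' "$(date -u +%F)" "$size_kb" >> "$GROWTH_LOG"
tail -n 1 "$GROWTH_LOG"
```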
Step 8: Lock down ingestion (beyond basic firewalling)
If Promtail ships over the public internet, use TLS plus authentication. Two common patterns:
- Nginx basic auth in front of Loki, with Promtail configured to use it.
- Private network/VPN (WireGuard/Tailscale) and keep Loki bound to a private address.
If you already use a mesh VPN for admin access, it’s also a clean fit for logs: fewer exposed ports and a simpler threat model. There’s a practical walkthrough here: Tailscale VPS VPN setup.
For basic auth, create a password file:
sudo apt-get install -y apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd promtail
Then add to the Loki Nginx site:
location /loki/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:3100/;
}
Then update Promtail’s client to authenticate. You can embed credentials in the URL (https://promtail:YOURPASS@logs.example.net/loki/api/v1/push), but the basic_auth block in the client config is preferred: it keeps credentials out of process lists and shell history.
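The clients section in /etc/promtail/promtail.yaml would then look like this (a sketch; the password file path is a name chosen here for illustration, keep it mode 600 and owned by promtail):

```yaml
clients:
  - url: https://logs.example.net/loki/api/v1/push
    basic_auth:
      username: promtail
      password_file: /etc/promtail/loki_password  # hypothetical path; chmod 600
```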
Common pitfalls (and how to diagnose them fast)
- Promtail runs but ships nothing: check journalctl -u promtail for 401/403/502 responses. If using TLS, confirm CA trust and hostname match.
- Journald permission errors: ensure promtail is in the systemd-journal group, then restart Promtail. Verify with id promtail.
- Nginx logs not found: confirm path patterns. On Ubuntu, Nginx logs are usually /var/log/nginx/access.log and /var/log/nginx/error.log. If you changed paths, update __path__.
- Exploding streams / ingestion limits: remove high-cardinality labels like remote_addr or per-request fields. Loki indexes labels; treat them like database indexes.
- Disk fills anyway: retention requires the compactor to run and delete. Check Loki logs with journalctl -u loki. Also validate the filesystem has enough inodes.
For broader VPS hygiene (SSH config, baseline hardening, and quick audits), keep a checklist nearby. This pairs well with Linux VPS security auditing in 2026.
Rollback plan (undo without surprises)
Rollbacks should be boring. This is the clean exit.
- Stop shipping from the app VPS:
sudo systemctl stop promtail
sudo systemctl disable promtail
- Remove Promtail binaries and config:
sudo rm -f /usr/local/bin/promtail
sudo rm -rf /etc/promtail /var/lib/promtail
sudo userdel promtail 2>/dev/null || true
- Stop Loki on the log server:
sudo systemctl stop loki
sudo systemctl disable loki
- Keep or remove stored logs: if you’re decommissioning, delete /var/lib/loki. If you’re migrating, keep it and rsync it to the new server.
If you want an extra safety net, take a VPS snapshot first. The workflow is covered in VPS snapshot backup automation.
Next steps (small upgrades that pay off)
- Add Grafana and a couple of saved LogQL queries (500 errors, slow endpoints, auth failures).
- Ship application logs from your systemd services by standardizing on stdout/stderr into journald. It’s less fragile than file tailing.
- Introduce alerts (Grafana alerting) for spikes in level=error or a sudden rise in status=500.
- Plan backups for the log server if logs matter for audit/compliance. If you want a proven file-level approach, see restic + S3 backups.
Summary: what you get from VPS log shipping with Loki
VPS log shipping with Loki lets you centralize the logs you actually rely on—systemd service failures, deploy output, Nginx 5xx bursts—without running a full-text search platform. The real payoff isn’t pretty dashboards. It’s answering “what changed?” and “what broke?” with one query while the details are still fresh.
If you’re doing this for production, pick a VPS plan with consistent disk performance and predictable network. For multi-service setups (Loki, Grafana, reverse proxy), a managed VPS hosting option can take the edge off baseline upkeep while still leaving you in control of config and retention.
If you’re centralizing logs across multiple servers, start with a VPS that has enough RAM for Loki and enough NVMe disk for retention. A HostMyCode VPS is a clean place to run Loki, and managed VPS hosting is worth considering if you want the platform maintained while you focus on pipelines and parsers.
FAQ
Do I need Grafana to use Loki?
No. You can query Loki via HTTP API or logcli. Grafana is handy once you’ve confirmed ingestion works.
Should I ship logs over the public internet?
You can, but use TLS and auth, or ship over a private VPN. For small teams, a mesh VPN is often simpler than exposing Loki directly.
Why not use OpenSearch/Elasticsearch instead?
If you need heavy full-text indexing and complex aggregations, a search cluster makes sense. Loki is usually cheaper and easier if your day-to-day work is time-ordered, label-filtered incident queries.
How much retention should I set?
Start with 7 days for most small systems, then measure disk growth for a week. Adjust retention or reduce noise before you buy more disk.
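The budget math is simple enough to script. A back-of-envelope sketch (the ingest figure is a made-up example; substitute what your own growth measurements show):

```shell
DAILY_MB=400        # hypothetical compressed chunks written per day
RETENTION_DAYS=7
HEADROOM_PCT=30     # compactor scratch space plus growth margin

need_mb=$(( DAILY_MB * RETENTION_DAYS ))
plan_mb=$(( need_mb * (100 + HEADROOM_PCT) / 100 ))
echo "retained: ${need_mb} MB, provision: ${plan_mb} MB"   # retained: 2800 MB, provision: 3640 MB
```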
Can I ship Docker container logs too?
Yes. The simplest pattern is to log to stdout/stderr and collect via journald (Docker’s journald driver) or tail JSON log files. Keep labels low-cardinality either way.