
VPS Backup Strategy 2026: A Practical Blog Guide to Restic + S3 with Verification, Retention, and Fast Restore

VPS backup strategy 2026 using restic + S3: retention, encryption, verification, and fast restores with realistic rollback steps.

By Anurag Singh
Updated on Apr 13, 2026
Category: Blog

A backup that “ran successfully” still isn’t a backup. Your 2026 VPS backup strategy needs to answer three questions on demand: is the data encrypted, are the snapshots actually restorable, and can you restore within your recovery window? Everything else is just paperwork.

This guide walks through a setup that works well for most VPS workloads: restic for encrypted, deduplicated snapshots; S3-compatible object storage for the repository; and an operating routine that includes verification, retention, and a restore path you’ve actually tested. It’s written for developers and sysadmins who want predictable outcomes, not tribal knowledge.

Why this approach works on a VPS (and where it doesn’t)

restic handles client-side encryption, deduplication, and efficient incrementals. You don’t need a special backup server or a mounted remote filesystem. You push snapshots to object storage and pull them back when you need them.

That maps cleanly to VPS life: limited local disk, occasional network hiccups, and the need to rebuild quickly. It also scales nicely from a simple blog to a modest SaaS—assuming you’re backing up the right things in the right way.

  • Best for: application servers, config backups, static assets, small-to-medium databases (with correct dump/snapshot handling), and “rebuildable” nodes where you still need data back fast.
  • Not ideal for: multi-terabyte datasets on tiny uplinks, or databases where you require point-in-time recovery (PITR). For Postgres PITR/HA, use a dedicated design (see our tutorial: Postgres high-availability cluster with Patroni).

Prerequisites and the scenario we’ll use

Scenario: you run a small API plus a PostgreSQL database on the same Ubuntu 24.04 VPS. You want nightly backups, 30-day retention, weekly “full-ish” integrity checks, and a restore runbook you can follow half-asleep at 3 a.m.

Prerequisites:

  • A Linux VPS with root or sudo access (Ubuntu 24.04 shown; Debian 13/Rocky/Alma are similar).
  • An S3-compatible bucket (AWS S3, Wasabi, Backblaze B2 S3 API, MinIO, etc.).
  • Outbound HTTPS from the VPS to your object storage endpoint.
  • Basic shell literacy and a place to store a backup password securely.

If you’re still choosing where to host the VPS, prioritize stable I/O and enough RAM to avoid swapping during dumps and checks. For most teams, a HostMyCode VPS gives you predictable resources plus full control over scheduling and encryption.

Step 1: Install restic and baseline tools

On Ubuntu 24.04:

sudo apt update
sudo apt install -y restic jq ca-certificates

Confirm:

restic version

Expected output looks like:

restic 0.16.x

If your distro ships an older restic, use the official release binary instead. On a VPS, you’re optimizing for “reliable and maintained,” not “newest possible,” but you do want current security fixes.
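
Installing from the official release binary can be sketched like this. The version number is an assumption, so check the restic releases page for the current one; the actual download and install steps are commented out so the sketch stays side-effect free:

```shell
# Hypothetical version; check https://github.com/restic/restic/releases for the current one.
RESTIC_VERSION="0.17.3"
URL="https://github.com/restic/restic/releases/download/v${RESTIC_VERSION}/restic_${RESTIC_VERSION}_linux_amd64.bz2"
echo "$URL"

# Then, as root:
# curl -fsSLo /tmp/restic.bz2 "$URL"
# bunzip2 -c /tmp/restic.bz2 > /usr/local/bin/restic
# chmod 0755 /usr/local/bin/restic
# restic version
```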

Step 2: Create a dedicated backup identity and local state directory

Create a dedicated system user to own local backup state. The backup service itself still runs as root (it has to read /etc and the Postgres dump directory), but keeping state under a dedicated account limits the blast radius if a script goes sideways.

sudo useradd -r -m -d /var/lib/restic -s /usr/sbin/nologin restic
sudo install -d -o restic -g restic -m 0700 /var/lib/restic

Keep local state and logs minimal. Don’t keep unencrypted backup data sitting on the VPS.

Step 3: Prepare S3 credentials and restic environment file

Create an env file readable only by root. restic needs S3 access keys plus a strong encryption password.

sudo install -d -m 0700 /etc/restic
sudo nano /etc/restic/api-vps.env

Example (adjust values):

export RESTIC_REPOSITORY="s3:https://s3.eu-central-1.example.com/hmc-prod-api-backups"
export RESTIC_PASSWORD="USE-A-LONG-RANDOM-PASSWORD-HERE"
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
# Optional but common with S3-compatible providers:
export AWS_DEFAULT_REGION="eu-central-1"

Lock it down:

sudo chmod 0600 /etc/restic/api-vps.env

Operational note: store RESTIC_PASSWORD in a password manager as well. If you lose it, the repository is effectively unrecoverable by design.
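
If you need to generate that password, anything long and random from /dev/urandom works. A minimal sketch producing 48 alphanumeric characters (openssl rand -base64 48 is an equivalent alternative if openssl is installed):

```shell
# Generate a 48-character alphanumeric password for RESTIC_PASSWORD.
PASSWORD="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 48)"
echo "generated ${#PASSWORD} characters"
```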

Step 4: Initialize the repository (once)

Initialize the remote repository once per bucket/path.

sudo -E bash -c 'source /etc/restic/api-vps.env; restic init'

Expected output:

created restic repository ...
Please note that knowledge of your password is required to access the repository.

Step 5: Decide what to back up (and what not to)

On most VPS deployments, you want:

  • Application code/config: /etc, systemd units, Nginx configs, app env files (if you keep them on disk).
  • Persistent app data: uploads, cache seeds, local storage directories.
  • Database dumps (or consistent snapshots) rather than raw DB files.

And you typically exclude:

  • /proc, /sys, /dev, /run, tmp dirs
  • Package caches: /var/cache
  • Logs if you ship them elsewhere (or only want a short local history)

If the VPS also serves as a reverse proxy, keep configs clean and predictable. If you route multiple apps, see: route multiple applications using Nginx URL paths. Clear separation makes restores faster and safer.
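
If the exclude list grows, restic’s --exclude-file flag keeps it out of the script. A sketch using a temp file (in practice, /etc/restic/excludes.txt is a natural home):

```shell
# Write the exclude list to a file, one pattern per line.
# A temp path is used here so the sketch has no system-wide effects.
EXCLUDE_FILE="$(mktemp)"
cat > "$EXCLUDE_FILE" <<'EOF'
/proc
/sys
/dev
/run
/tmp
/var/tmp
/var/cache
EOF
# Then replace the repeated --exclude flags with:
# restic backup --exclude-file "$EXCLUDE_FILE" /etc /srv/hmc-api /var/backups/postgres
wc -l < "$EXCLUDE_FILE"
```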

Step 6: Add a consistent PostgreSQL dump step

For single-node Postgres on a VPS, a nightly pg_dump is often sufficient. If you need PITR, plan for WAL archiving rather than treating dumps like a substitute.

Create a dump directory and a limited database role for backups:

sudo install -d -m 0750 -o postgres -g postgres /var/backups/postgres

Create a backup role (run as postgres):

sudo -u postgres psql -c "CREATE ROLE backup_dumper WITH LOGIN PASSWORD 'CHANGE_ME'"
sudo -u postgres psql -c "GRANT CONNECT ON DATABASE appdb TO backup_dumper"
sudo -u postgres psql -d appdb -c "GRANT USAGE ON SCHEMA public TO backup_dumper"
sudo -u postgres psql -d appdb -c "GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_dumper"
sudo -u postgres psql -d appdb -c "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_dumper"

Create a .pgpass file for non-interactive dumps (root-owned; readable only by root):

sudo nano /etc/restic/pgpass

Add this single line (format is host:port:database:user:password):

127.0.0.1:5432:appdb:backup_dumper:CHANGE_ME

Then lock it down:

sudo chmod 0600 /etc/restic/pgpass

Step 7: Write the backup script (with locking, logging, and sane exits)

This script does four things in order: (1) creates a fresh DB dump, (2) runs restic with excludes, (3) applies retention and prunes, (4) checks repository health.

sudo install -d -m 0755 /usr/local/sbin
sudo nano /usr/local/sbin/restic-backup-api.sh

Example script (unique paths/ports; adjust to your stack):

#!/usr/bin/env bash
set -euo pipefail

ENV_FILE="/etc/restic/api-vps.env"
PGPASS_FILE="/etc/restic/pgpass"
LOCK_FILE="/var/lock/restic-api.lock"
LOG_FILE="/var/log/restic-api-backup.log"

APP_NAME="hmc-api"
HOSTNAME_TAG="$(hostname -s)"
TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

DUMP_DIR="/var/backups/postgres"
DUMP_FILE="$DUMP_DIR/appdb-$TS.dump"

EXCLUDES=(
  --exclude /proc
  --exclude /sys
  --exclude /dev
  --exclude /run
  --exclude /tmp
  --exclude /var/tmp
  --exclude /var/cache
)

exec > >(tee -a "$LOG_FILE") 2>&1

echo "[$TS] Starting backup for $APP_NAME on $HOSTNAME_TAG"

# Simple lock to avoid overlapping runs
if ! ( set -o noclobber; echo "$$" > "$LOCK_FILE" ) 2>/dev/null; then
  echo "[$TS] Lock exists ($LOCK_FILE). Another backup is running. Exiting."
  exit 0
fi
trap 'rm -f "$LOCK_FILE"' EXIT

source "$ENV_FILE"

# 1) Database dump
export PGPASSFILE="$PGPASS_FILE"

echo "[$TS] Creating PostgreSQL dump: $DUMP_FILE"
/usr/bin/pg_dump \
  --host 127.0.0.1 \
  --port 5432 \
  --username backup_dumper \
  --format=custom \
  --file "$DUMP_FILE" \
  appdb

# 2) restic backup
echo "[$TS] Running restic backup"
restic backup \
  --tag "$APP_NAME" \
  --tag "$HOSTNAME_TAG" \
  "${EXCLUDES[@]}" \
  /etc \
  /srv/hmc-api \
  /var/backups/postgres

# 3) retention policy
echo "[$TS] Applying retention policy"
restic forget \
  --tag "$APP_NAME" \
  --keep-daily 30 \
  --keep-weekly 8 \
  --keep-monthly 12 \
  --prune

# 4) repo check (lightweight: metadata plus ~2% of pack data)
echo "[$TS] Checking repository (metadata + 2% of data)"
restic check --read-data-subset=1/50

echo "[$TS] Backup finished successfully"

Make it executable and create the log file:

sudo chmod 0750 /usr/local/sbin/restic-backup-api.sh
sudo touch /var/log/restic-api-backup.log
sudo chmod 0640 /var/log/restic-api-backup.log
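
The noclobber lock in the script has one weakness: if the host crashes mid-run, the stale lock file blocks the next backup until someone deletes it. flock(1) from util-linux avoids this because the kernel releases the lock when the process exits. A minimal sketch (the lock path is illustrative):

```shell
# flock(1) instead of a noclobber lock file: the kernel drops the lock
# on process exit, even after a crash, so there is no stale-lock cleanup.
LOCK_FILE="/tmp/restic-api.flock"
(
  flock -n 9 || { echo "another backup is running; exiting"; exit 0; }
  echo "lock acquired; backup steps run here"
) 9>"$LOCK_FILE"
# prints: lock acquired; backup steps run here
```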

Step 8: Run a first backup manually and confirm snapshot creation

sudo /usr/local/sbin/restic-backup-api.sh

Then list snapshots:

sudo -E bash -c 'source /etc/restic/api-vps.env; restic snapshots --tag hmc-api'

Expected output includes snapshot IDs and timestamps:

ID        Time                 Host        Tags        Paths
9a1c...    2026-04-13 01:10:22  api-vps-1   hmc-api,... /etc ...

If the snapshot list is empty, stop and troubleshoot before you schedule anything. Check credentials, repository URL formatting, DNS resolution, and whether your S3 provider expects a different endpoint or URL style.
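
Step 1 installed jq, and it pairs well with restic’s --json output for freshness checks. A sketch against a hand-written sample (the field names match restic snapshots --json; the values are invented):

```shell
# Sample of what `restic snapshots --json` returns (abbreviated; values invented).
cat > /tmp/snapshots-sample.json <<'EOF'
[
  {"short_id": "9a1cfa01", "time": "2026-04-12T01:10:21Z", "hostname": "api-vps-1", "tags": ["hmc-api"]},
  {"short_id": "b22d0e77", "time": "2026-04-13T01:10:22Z", "hostname": "api-vps-1", "tags": ["hmc-api"]}
]
EOF
# Newest snapshot timestamp: handy for a "backup is fresh" monitoring check.
# ISO 8601 strings in the same zone sort lexicographically, so max_by works.
jq -r 'max_by(.time).time' /tmp/snapshots-sample.json
```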

Step 9: Schedule it with systemd (not cron) and alert on failures

On modern distros, systemd timers are easier to debug than cron. You get consistent logging and straightforward status checks.

Create a service unit:

sudo nano /etc/systemd/system/restic-api-backup.service
[Unit]
Description=Nightly restic backup for hmc-api
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# No EnvironmentFile= here: the script sources /etc/restic/api-vps.env itself,
# and that file's "export"-prefixed lines are not valid EnvironmentFile= syntax.
ExecStart=/usr/local/sbin/restic-backup-api.sh
User=root
Group=root
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=7

Create a timer (runs daily at 01:10 UTC with jitter):

sudo nano /etc/systemd/system/restic-api-backup.timer
[Unit]
Description=Timer for restic-api-backup

[Timer]
OnCalendar=*-*-* 01:10:00 UTC
RandomizedDelaySec=15m
Persistent=true

[Install]
WantedBy=timers.target

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable --now restic-api-backup.timer
sudo systemctl list-timers --all | grep restic-api-backup

Don’t rely on “someone will notice.” Wire up failure alerts—email via a local MTA, your monitoring stack, whatever you already trust. If you’re already alerting on security/audit events, reuse that approach; see: auditd log monitoring and alerting.
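
One low-dependency way to wire those alerts is systemd’s OnFailure= directive, which starts a second unit whenever the backup service fails. A sketch, assuming a hypothetical restic-backup-notify.service and a working local MTA (swap the mail command for whatever alerting you already trust):

```ini
# /etc/systemd/system/restic-api-backup.service.d/10-onfailure.conf
[Unit]
OnFailure=restic-backup-notify.service

# /etc/systemd/system/restic-backup-notify.service (hypothetical)
[Unit]
Description=Alert on restic backup failure

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo "restic backup failed on %H" | mail -s "backup FAILED on %H" ops@example.com'
```

Run sudo systemctl daemon-reload after adding the drop-in, then verify by forcing one failing run.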

Verification: prove restores work (without nuking production)

The safest verification is boring: restore into a scratch directory, then restore the latest DB dump into a throwaway database. Do this at least monthly, and again after major upgrades.

1) Restore files to a temp path

sudo -E bash -c 'source /etc/restic/api-vps.env; mkdir -p /root/restore-test && restic restore latest --tag hmc-api --target /root/restore-test'

Check that key files exist:

sudo ls -la /root/restore-test/etc/nginx/
sudo ls -la /root/restore-test/var/backups/postgres/ | tail

2) Validate the PostgreSQL dump restores cleanly

Create a scratch database and restore the most recent dump:

LATEST_DUMP=$(sudo sh -c 'ls -1 /root/restore-test/var/backups/postgres/*.dump' | tail -n 1)
echo "$LATEST_DUMP"

The postgres user can’t read inside /root, so copy the dump to a readable path first:

sudo install -m 0644 "$LATEST_DUMP" /var/tmp/appdb-restore-test.dump
sudo -u postgres createdb appdb_restore_test
sudo -u postgres pg_restore --dbname appdb_restore_test --jobs=2 /var/tmp/appdb-restore-test.dump
sudo -u postgres psql -d appdb_restore_test -c "SELECT count(*) FROM information_schema.tables;"

Expected output: a non-zero table count and no fatal errors. If you hit permission errors, revisit the dump user grants, or dump as a more privileged role that’s restricted to local socket access. Clean up the scratch database when you’re done:

sudo -u postgres dropdb appdb_restore_test
sudo rm -f /var/tmp/appdb-restore-test.dump

Common pitfalls that break real-world backups

  • Backing up live database files (e.g., /var/lib/postgresql). It may look fine for weeks, then fail on the one restore that matters. Use dumps, coordinated snapshots, or a replication/PITR design.
  • Unbounded retention. Object storage stays “cheap” right up until it isn’t. Daily for a month, then weekly/monthly for longer is a sensible default. The forget --prune step prevents slow, silent cost creep.
  • No restore tests. Your first restore attempt should not be during an outage.
  • Credential sprawl. Scope S3 keys to one bucket/prefix. Don’t recycle application credentials.
  • Network hardening breaks backups. After tightening egress or firewall rules, confirm that HTTPS to your S3 endpoint still works. For SSH-safe firewall tightening, see: UFW hardening playbook.

Rollback plan: what to do if a backup run goes wrong

Backup failures can create their own incident: local disk fills up from dumps, a prune behaves differently than you expected, or a repo ends up locked.

  1. If local disk fills up: purge old local dumps first (they’re already inside restic snapshots).
    sudo du -sh /var/backups/postgres
    sudo find /var/backups/postgres -type f -name '*.dump' -mtime +3 -delete
  2. If restic reports a stale lock: confirm no backup is running, then remove locks.
    sudo -E bash -c 'source /etc/restic/api-vps.env; restic list locks'
    sudo -E bash -c 'source /etc/restic/api-vps.env; restic unlock'
  3. If a retention policy prunes too aggressively: stop the timer, then adjust policy and re-run. You can’t un-prune, so keep policies conservative, preview changes with restic forget --dry-run, and test them on non-production repos first.
    sudo systemctl stop restic-api-backup.timer
  4. If repository health looks suspicious: run a full check during a maintenance window.
    sudo -E bash -c 'source /etc/restic/api-vps.env; restic check'

If you want a broader disaster recovery runbook (RTO/RPO thinking, restore drills, and “what to rebuild vs what to restore”), use: VPS disaster recovery planning runbook.

Performance and cost tuning without guesswork

If backups start to affect latency, measure first. CPU spikes during compression might be acceptable. I/O contention during peak traffic usually isn’t.

  • Schedule away from peak. The systemd timer runs at 01:10 UTC; move it to your quiet window.
  • Use I/O niceness. The unit file uses Nice and I/O scheduling hints. They won’t fix everything, but they can reduce collateral damage.
  • Reduce backup scope. Exclude rebuildable directories (node_modules, .venv) if you can reproduce them reliably.
  • Watch for retransmits and drops. If uploads crawl, suspect the network path. This eBPF monitoring playbook helps pinpoint latency and packet loss: Linux VPS monitoring with eBPF.

Short next steps (what to tighten after you’re stable)

  1. Split data from compute: keep databases on a separate node or managed database service if the workload grows.
  2. Add an offsite “second copy”: replicate the restic repo to another bucket or region for provider-level failure scenarios.
  3. Automate restore drills: a monthly systemd timer that restores to /root/restore-test and runs a basic DB integrity query is often enough.
  4. Document your RTO/RPO: write down what you restore first and what you rebuild. It cuts down on bad decisions during an incident.
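
The second copy in step 2 maps to restic copy (available since restic 0.14), which transfers snapshots between repositories and re-deduplicates at the destination. A sketch with hypothetical bucket names; the actual copy command is commented out because it needs live credentials:

```shell
# Replicate snapshots to a second repository in another region.
# Endpoints and bucket names are hypothetical; each repo has its own password.
SRC_REPO="s3:https://s3.eu-central-1.example.com/hmc-prod-api-backups"
DST_REPO="s3:https://s3.eu-west-1.example.com/hmc-prod-api-backups-copy"
echo "copy: $SRC_REPO -> $DST_REPO"

# Run against the destination repo, pulling snapshots from the source:
# restic -r "$DST_REPO" copy --from-repo "$SRC_REPO" \
#   --from-password-file /etc/restic/primary.pass
```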

Summary: a VPS backup strategy you can trust in 2026

A good backup setup stays boring. restic + S3 gives you encrypted snapshots, predictable retention, and a restore path you can test without taking production down. If you do one follow-up task after reading, run a restore drill and time it end-to-end.

If you want a stable platform for this workflow, run it on a VPS that won’t fight you on permissions, networking, or noisy neighbors. Pick a HostMyCode VPS for full control over scheduling, encryption, and nightly I/O, or choose managed VPS hosting when you’d rather hand off patching, monitoring, and baseline hardening while keeping root-level flexibility.

FAQ

Should I back up Docker volumes directly with restic?

You can, but treat database volumes carefully. For app data volumes (uploads, generated files), backing them up directly is usually fine. For Postgres/MySQL volumes, use logical dumps or coordinated snapshots so you don’t end up with inconsistent restores.

How often should I run restic check?

For most VPS setups, a lightweight daily check with --read-data-subset plus a full restic check monthly is a good balance. Also run a full check after storage incidents or sudden backup anomalies.

What’s a good retention policy for a small production VPS?

A common baseline is 30 daily, 8 weekly, and 12 monthly snapshots. Tune it based on compliance requirements and how far back you realistically roll changes.

What’s the safest way to store RESTIC_PASSWORD?

Keep it in a password manager and also in a break-glass location your on-call rotation can access. Don’t store it only on the VPS you’re backing up.

Can I use this strategy for WordPress?

Yes: back up /var/www, Nginx/Apache configs, and a database dump. If you’re hosting WordPress, pair backups with performance tuning and caching; HostMyCode’s WordPress hosting plans are built for that mix of workload and operational needs.
