
Linux VPS reverse SSH tunnel for secure access in 2026: expose internal services safely without opening inbound ports


By Anurag Singh
Updated on Apr 20, 2026

You don’t always control the network where a service runs. Maybe it’s a staging box behind CGNAT, a client’s on‑prem VM, or a lab server hanging off a Wi‑Fi router you can’t touch. A Linux VPS reverse SSH tunnel gives you a reliable way in: the “hidden” machine initiates an outbound SSH connection to your VPS, and the VPS becomes your stable entry point—without opening any new inbound ports on the hidden side.

This walkthrough stays practical. You’ll build the reverse tunnel, run it under systemd, lock it down with a dedicated user and strict SSH key options, then verify everything with concrete checks. At the end, you’ll also have a clean rollback path.

Scenario: publishing a private app from a NATed host via a VPS

Here’s the reference setup so the commands and file paths match what you see:

  • Hidden host: appnode (Debian 12) running an internal dashboard on 127.0.0.1:9000.
  • VPS: relay-vps (Ubuntu 24.04 LTS) with a public IP and SSH on port 22.
  • Goal: Access the dashboard through the VPS at 127.0.0.1:19000 (VPS-local), then optionally publish it via Nginx with TLS.

The important detail is direction: appnode → VPS. Because the hidden host dials out, NAT and inbound firewall rules usually don’t matter.

Prerequisites (keep these explicit)

  • A VPS you control with root access (Ubuntu 24.04 LTS shown).
  • SSH access to the VPS, and ability to edit /etc/ssh/sshd_config.d/.
  • The hidden host can make outbound connections to the VPS on TCP/22 (or whatever SSH port you use).
  • OpenSSH client on the hidden host: ssh -V should show OpenSSH 8.9+ (Debian 12 and Ubuntu 24.04 ship newer than that).
  • An internal service to expose (we’ll assume it listens on 127.0.0.1:9000).
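You can sanity-check these prerequisites from the hidden host before touching anything else. A quick sketch (RELAY_VPS_IP is a placeholder for your VPS address, and nc may need installing, e.g. via the netcat-openbsd package):

```shell
# OpenSSH client version on the hidden host (want 8.9 or newer)
ssh -V

# Outbound reachability to the VPS SSH port (exits 0 if TCP/22 answers)
nc -zv RELAY_VPS_IP 22

# Confirm the internal service is actually listening on loopback:9000
ss -ltn | grep 127.0.0.1:9000
```

If any of these fail, fix that first; every later step assumes all three pass.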

If you’re still choosing infrastructure, start with a small HostMyCode VPS for the relay. What you’re paying for here is a stable public endpoint and predictable networking.

Step 1 — Create a dedicated tunnel user on the VPS

On relay-vps, create an account that exists only for the tunnel. No password login, no interactive shell.

sudo adduser --disabled-password --gecos "Reverse Tunnel" tunnel
sudo usermod -s /usr/sbin/nologin tunnel
sudo mkdir -p /home/tunnel/.ssh
sudo chmod 700 /home/tunnel/.ssh
sudo chown -R tunnel:tunnel /home/tunnel/.ssh

Expected output: adduser creates the account and home directory. Nothing else needs to happen here.
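If you want a quick confirmation that the account looks right, check its passwd entry:

```shell
# The last field should be /usr/sbin/nologin after the usermod above
getent passwd tunnel
```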

Step 2 — Generate a dedicated SSH key on the hidden host

On appnode, generate a keypair used only for this one purpose. Stick with Ed25519, and add a comment you’ll recognize later.

ssh-keygen -t ed25519 -f ~/.ssh/relay_tunnel_ed25519 -C "tunnel@appnode-to-relay-vps"

Expected output includes:

Generating public/private ed25519 key pair.
Your identification has been saved in /home/.../.ssh/relay_tunnel_ed25519
Your public key has been saved in /home/.../.ssh/relay_tunnel_ed25519.pub

Step 3 — Authorize the key on the VPS with tight restrictions

Copy the public key to the VPS, then lock it down so it can’t be used for normal logins. The restrictions live in authorized_keys. Note that ssh-copy-id won’t work here: the tunnel account has no password and no interactive shell, so you can’t authenticate as it yet. Print the key on appnode and append it over your normal admin login instead.

On appnode, print the public key:

cat ~/.ssh/relay_tunnel_ed25519.pub

Now on relay-vps, paste that key into /home/tunnel/.ssh/authorized_keys and add forced options to the front of the key line.

sudo nano /home/tunnel/.ssh/authorized_keys

Example authorized key entry (single line):

restrict,port-forwarding,permitlisten="127.0.0.1:19000",command="/bin/false" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... tunnel@appnode-to-relay-vps

  • restrict disables TTY allocation, agent forwarding, X11 forwarding, and more by default.
  • port-forwarding re-enables forwarding (and nothing else).
  • permitlisten limits which addresses and ports the key may bind on the VPS (we force 127.0.0.1:19000).
  • command="/bin/false" prevents command execution even if a session is requested.

Fix permissions:

sudo chown -R tunnel:tunnel /home/tunnel/.ssh
sudo chmod 600 /home/tunnel/.ssh/authorized_keys

Step 4 — Enable controlled reverse forwarding on the VPS SSHD

Reverse forwarding can be disabled globally in OpenSSH. Make the rule explicit so a future hardening pass doesn’t silently break the tunnel.

On relay-vps, create a drop-in config:

sudo nano /etc/ssh/sshd_config.d/55-reverse-tunnel.conf

Put this in the file:

Match User tunnel
  AllowTcpForwarding remote
  GatewayPorts no
  PermitTTY no
  X11Forwarding no

Why GatewayPorts no? It keeps the reverse-forwarded port on loopback by default. That’s the safer baseline.

Validate and reload SSH:

sudo sshd -t
sudo systemctl reload ssh

Expected output: sshd -t prints nothing when the config is valid.
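You can also confirm the Match block actually applies to the tunnel user by asking sshd for its effective configuration (OpenSSH’s -T extended test mode with a -C connection spec; the host and addr values here are placeholders):

```shell
# Dump the effective config as sshd would apply it for the tunnel user
sudo sshd -T -C user=tunnel,host=relay-vps,addr=127.0.0.1 | grep -Ei 'allowtcpforwarding|gatewayports|permittty'
```

You should see allowtcpforwarding remote, gatewayports no, and permittty no. If you see the global defaults instead, the drop-in file isn’t being picked up.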

If you’re hardening SSH at the same time, keep this Match block aligned with your baseline. The checks in Linux VPS hardening checklist in 2026 fit well with the “one tunnel user, one job” approach.

Step 5 — Bring up the Linux VPS reverse SSH tunnel (manual test)

Start the reverse tunnel from appnode to relay-vps. The option format is:

-R [bind_addr:]vps_port:target_addr:target_port

ssh -i ~/.ssh/relay_tunnel_ed25519 \
  -o ExitOnForwardFailure=yes \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -N \
  -R 127.0.0.1:19000:127.0.0.1:9000 \
  tunnel@RELAY_VPS_IP

  • -N runs no remote command; it’s forwarding only.
  • ExitOnForwardFailure avoids a misleading “connected” state if the port bind fails.
  • ServerAlive* helps the session fail fast and recover cleanly on flaky networks.

On success, the command stays running and prints nothing. Leave it up for the next step.

Step 6 — Verify the tunnel on the VPS (ports and HTTP)

On relay-vps, confirm SSH bound the forwarded port on loopback:

sudo ss -ltnp | grep 19000 || true

Expected output should resemble:

LISTEN 0 128 127.0.0.1:19000 0.0.0.0:* users:(("sshd",pid=1234,fd=8))

Now test the forwarded service from the VPS itself:

curl -i http://127.0.0.1:19000/

What you see depends on your app, but you should get something predictable (HTTP/1.1 200, or a known redirect/login page). If you get Connection refused, the tunnel isn’t up—or the internal service isn’t listening where you think it is.

Step 7 — Make the tunnel persistent with a systemd unit on the hidden host

A manual SSH session is fine for validation. For anything long-lived, you want a service that restarts automatically and shows up in logs.

On appnode, create a dedicated system user for the tunnel (optional but tidy). If you already have a locked-down service user, you can reuse it.

sudo useradd -r -m -d /var/lib/reverse-tunnel -s /usr/sbin/nologin reverse-tunnel

Copy the key into that user’s context:

sudo mkdir -p /var/lib/reverse-tunnel/.ssh
sudo cp ~/.ssh/relay_tunnel_ed25519 /var/lib/reverse-tunnel/.ssh/
sudo cp ~/.ssh/relay_tunnel_ed25519.pub /var/lib/reverse-tunnel/.ssh/
sudo chown -R reverse-tunnel:reverse-tunnel /var/lib/reverse-tunnel/.ssh
sudo chmod 700 /var/lib/reverse-tunnel/.ssh
sudo chmod 600 /var/lib/reverse-tunnel/.ssh/relay_tunnel_ed25519

Create the unit file:

sudo nano /etc/systemd/system/reverse-ssh-tunnel.service

Use this (edit RELAY_VPS_IP if needed):

[Unit]
Description=Reverse SSH tunnel to relay-vps (dashboard)
After=network-online.target
Wants=network-online.target

[Service]
User=reverse-tunnel
Group=reverse-tunnel
ExecStart=/usr/bin/ssh -i /var/lib/reverse-tunnel/.ssh/relay_tunnel_ed25519 \
  -o ExitOnForwardFailure=yes \
  -o StrictHostKeyChecking=accept-new \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o IdentitiesOnly=yes \
  -N \
  -R 127.0.0.1:19000:127.0.0.1:9000 \
  tunnel@RELAY_VPS_IP
Restart=always
RestartSec=5

# Basic hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/reverse-tunnel

[Install]
WantedBy=multi-user.target

Enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable --now reverse-ssh-tunnel.service
sudo systemctl status reverse-ssh-tunnel.service --no-pager

Expected output: you should see Active: active (running). If it keeps restarting, go straight to the logs:

sudo journalctl -u reverse-ssh-tunnel.service -n 100 --no-pager

If you want tighter “prove it’s healthy” behavior than Restart=always, the patterns in Systemd watchdog on a VPS apply just as well to a tunnel.

Step 8 — Optional: publish the tunneled service via Nginx on the VPS

Keeping the reverse-forwarded port on loopback is the right default. If you need browser access, publish it through Nginx with TLS and access controls, rather than exposing the SSH-forwarded port directly.

On relay-vps:

sudo apt-get update
sudo apt-get install -y nginx

Create a site file:

sudo nano /etc/nginx/sites-available/dashboard-tunnel.conf

Example (HTTP only shown; add TLS via your normal process):

server {
  listen 80;
  server_name dashboard.example.com;

  location / {
    proxy_pass http://127.0.0.1:19000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Enable and test:

sudo ln -s /etc/nginx/sites-available/dashboard-tunnel.conf /etc/nginx/sites-enabled/dashboard-tunnel.conf
sudo nginx -t
sudo systemctl reload nginx

Expected output: nginx -t should report syntax is ok and test is successful.

If you already use Nginx as an app router, keep the style consistent with your existing configs. The checks in Nginx reverse proxy on a VPS (2026) are the same idea here: proxy locally, terminate TLS at the edge, keep backends private.

Step 9 — Add a basic access-control layer (recommended)

A reverse tunnel solves reachability, not authorization. If the app wasn’t built for internet exposure, add at least one control before you point DNS at it:

  • HTTP auth (basic auth) in Nginx.
  • IP allowlist if your team uses fixed egress IPs.
  • mTLS if automation or internal tooling will consume it.
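For the first option, basic auth needs a password file plus two directives in the site config. A sketch (the file path and the user name dashadmin are just examples):

```shell
# htpasswd ships with apache2-utils on Debian/Ubuntu
sudo apt-get install -y apache2-utils

# Create the password file with one user (prompts for a password)
sudo htpasswd -c /etc/nginx/.htpasswd_dashboard dashadmin
```

Then add auth_basic "Restricted"; and auth_basic_user_file /etc/nginx/.htpasswd_dashboard; inside the location / block of dashboard-tunnel.conf, run nginx -t, and reload Nginx.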

For mTLS, it’s usually cleaner to issue internal certs instead of reusing public ACME flows. The approach in Linux VPS certificate automation with Step CA fits nicely with “publish via VPS” setups.

Common pitfalls (and how to spot them fast)

  • Reverse bind fails because the port is already used.
    Symptom: systemd restarts in a loop; logs show remote port forwarding failed.
    Fix: pick a different VPS port (e.g., 19001) and update permitlisten plus the systemd unit.
  • Tunnel binds to 0.0.0.0 by accident (exposed to the internet).
    Symptom: ss -ltnp shows 0.0.0.0:19000.
    Fix: use -R 127.0.0.1:19000:... and keep GatewayPorts no. Also enforce permitlisten in authorized_keys.
  • SSHD policy blocks forwarding.
    Symptom: connection works, but no listening port appears on the VPS.
    Fix: verify AllowTcpForwarding remote applies to the tunnel user. Run sudo sshd -T | grep -i allowtcpforwarding and check Match blocks.
  • Internal service listens only on a different interface/port.
    Symptom: tunnel is up, but curl http://127.0.0.1:19000 returns 502 via Nginx or hangs.
    Fix: on appnode run ss -ltnp | grep 9000. Update the -R ...:127.0.0.1:9000 target accordingly.
  • Host key prompts break non-interactive startup.
    Symptom: systemd unit hangs the first time.
    Fix: keep StrictHostKeyChecking=accept-new or pre-populate known_hosts for the service user.
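Pre-populating known_hosts for the service user can look like this on appnode (a sketch; RELAY_VPS_IP is a placeholder, and ssh-keyscan output should be checked against the server’s real fingerprint before you trust it):

```shell
# Fetch the VPS host key and install it for the service user
ssh-keyscan -t ed25519 RELAY_VPS_IP | sudo tee /var/lib/reverse-tunnel/.ssh/known_hosts > /dev/null
sudo chown reverse-tunnel:reverse-tunnel /var/lib/reverse-tunnel/.ssh/known_hosts
sudo chmod 600 /var/lib/reverse-tunnel/.ssh/known_hosts
```

With known_hosts in place, the unit starts non-interactively even on first run.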

Rollback plan (clean, predictable)

If you need to back out quickly, do it in this order:

  1. On the hidden host: stop and disable the service.
sudo systemctl disable --now reverse-ssh-tunnel.service
  2. On the VPS: remove the key and reload SSH.
sudo sed -i '/tunnel@appnode-to-relay-vps/d' /home/tunnel/.ssh/authorized_keys
sudo systemctl reload ssh
  3. If you published it, disable the Nginx site:
sudo rm -f /etc/nginx/sites-enabled/dashboard-tunnel.conf
sudo nginx -t && sudo systemctl reload nginx

Verification after rollback: ss -ltnp | grep 19000 on the VPS should return nothing, and the DNS name should stop serving the internal app.

Next steps (practical upgrades)

  • Automate health checks: a tiny script on the VPS can curl the loopback port and alert if it fails.
  • Add logs you can actually use: ship tunnel service logs and Nginx access logs to a central store. If you’re already using Loki, follow the structure in VPS log shipping with Loki.
  • Rotate keys on a schedule: this tunnel is an access path, so treat the key like production credentials.
  • Consider a managed relay VPS if you don’t want to babysit OS updates and SSH policy drift. A managed VPS hosting plan can handle the base server while you keep control of the app layer.
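The health-check idea above can be as small as this (a sketch to run from cron or a systemd timer on the VPS; the URL and the logger tag are assumptions from this guide’s setup):

```shell
#!/usr/bin/env bash
# Probe the tunneled service on loopback; log a warning on failure.
set -u

URL="http://127.0.0.1:19000/"

if curl -fsS --max-time 5 -o /dev/null "$URL"; then
  exit 0
else
  # Record the failure in the system journal/syslog for alerting
  logger -t tunnel-health "reverse tunnel check failed for $URL"
  exit 1
fi
```

Run it every minute and alert on nonzero exit; the curl -f flag makes HTTP error statuses count as failures too.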

If you’re setting up a relay endpoint for reverse tunnels, prioritize boring reliability: stable networking, enough disk I/O for logs, and upgrades you can do without surprises. Start with a HostMyCode VPS, then move to managed VPS hosting when you’d rather spend time on services than routine server upkeep.

FAQ

Is a Linux VPS reverse SSH tunnel safe for production?

Yes, if you restrict the SSH key (use restrict and permitlisten), bind the forwarded port to 127.0.0.1, and publish through a proper reverse proxy with TLS and access controls.

Can I expose multiple internal services through one tunnel?

You can run multiple -R forwards in one SSH session, or run separate systemd units (one per service). Separate units make troubleshooting and rollback simpler.
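For the single-session option, multiple -R flags can share one connection. A sketch extending this guide’s example with a hypothetical second service on port 9100:

```shell
ssh -i ~/.ssh/relay_tunnel_ed25519 \
  -o ExitOnForwardFailure=yes \
  -N \
  -R 127.0.0.1:19000:127.0.0.1:9000 \
  -R 127.0.0.1:19100:127.0.0.1:9100 \
  tunnel@RELAY_VPS_IP
```

Each extra bind must also be allowed on the VPS side: permitlisten may be listed more than once in the authorized_keys options, e.g. permitlisten="127.0.0.1:19000",permitlisten="127.0.0.1:19100".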

Why not just open a firewall port on the hidden host?

Sometimes you can’t (CGNAT, upstream firewall, compliance). Even when you can, a reverse tunnel avoids exposing a new inbound service surface on a machine that wasn’t designed to be internet-facing.

How do I make the port accessible externally from the VPS?

Don’t bind the SSH forward to 0.0.0.0. Keep it on loopback and use Nginx (or Caddy) to publish the endpoint with TLS, auth, rate limits, and logging.

What’s the quickest way to confirm the tunnel is working?

On the VPS: ss -ltnp | grep 19000, then curl -i http://127.0.0.1:19000/. Those two checks tell you whether the port is bound and whether the upstream service responds.

Summary

A reverse SSH tunnel flips the usual exposure model: the private box connects out, and your VPS becomes the controlled entry point. In 2026, it’s still one of the simplest ways to reach internal tools without punching inbound holes through networks you don’t control—especially if you combine SSH key restrictions, loopback-only binds, and an HTTP reverse proxy.

If you want a stable relay endpoint, run this pattern on a HostMyCode VPS and keep the tunnel user locked down. It’s a small setup that saves time every time the network won’t cooperate.