
Your VPS doesn’t need to broadcast SSH, database ports, or internal dashboards to the entire internet. In 2026, the simplest way to keep admin access while closing inbound ports is a private mesh VPN. This Tailscale VPS VPN setup guide shows a practical deployment on Debian 13, including firewall changes, verification steps, and a rollback plan that won’t ruin your day.
Here’s the setup: you run a small SaaS API on a VPS. Nginx stays public on 80/443. Everything else—SSH, Grafana, Redis, internal health endpoints—should be reachable only from your team’s devices. You’ll use Tailscale to create an encrypted overlay network, then tighten the VPS so those services only accept connections over Tailscale.
What you’ll build (and why it’s worth doing)
You’ll end up with:
- SSH reachable only via Tailscale (no public port 22 exposure).
- Private-only admin services (example: Grafana on port 3000, Redis on 6379) bound to the Tailscale interface.
- Firewall rules that match intent: public HTTP(S) allowed, admin ports denied from the internet, allowed from the tailnet.
- Device-based access control using Tailscale’s auth model (SSO/MFA supported depending on your identity provider).
If you currently rely on jump hosts, this usually simplifies your access story. If you already have a bastion you trust, keep it as a break-glass fallback—but stop routing every routine admin task through it. For a deeper bastion approach, see SSH bastion host setup.
Prerequisites
- A VPS running Debian 13 (root or sudo access).
- A public domain is optional (we’ll keep only 80/443 public in this example).
- A Tailscale account and at least one client device (macOS/Windows/Linux) signed in.
- Basic comfort with SSH and editing config files.
Hosting note: if you want a predictable Linux environment for network hardening, a HostMyCode VPS gives you full control of iptables/nftables, systemd services, and kernel networking—exactly what this requires.
Tailscale VPS VPN setup: step-by-step on Debian 13
Follow this order and you’ll keep a safe way back in: install Tailscale, confirm tailnet access works, then tighten the firewall and SSH exposure.
Step 1: Update the server and install prerequisites

```bash
sudo apt update
sudo apt -y upgrade
sudo apt -y install curl gnupg lsb-release
```

Expected output: package lists update successfully; upgrades may require a reboot if the kernel updates.
Step 2: Install Tailscale

Use the official install script (it sets up the repo and installs the package):

```bash
curl -fsSL https://tailscale.com/install.sh | sh
```

Verify the service is present:

```bash
systemctl status tailscaled --no-pager
```

Expected output includes `active (running)`.
Step 3: Bring the VPS into your tailnet

Run:

```bash
sudo tailscale up
```

You’ll get a login URL. Open it on your workstation (already logged into your Tailscale account) and approve the new device.

Then confirm the VPS has a tailnet IP:

```bash
tailscale status
tailscale ip -4
```

Expected output: a `100.x.y.z` IPv4 address for the server.
Step 4: Confirm you can reach the VPS over Tailscale before changing firewall rules

From your workstation (not the server), ping the VPS tailnet IP:

```bash
ping -c 3 100.64.12.34
```

Expected output: 0% packet loss.

Then SSH to the VPS using the tailnet IP:

```bash
ssh admin@100.64.12.34
```

If you can log in, you now have a management path that doesn’t depend on the public interface.
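If you want to make this pre-flight check repeatable before every risky change, it can be scripted. A minimal sketch; the `preflight` function name and the `admin` user are examples from this guide, not anything Tailscale ships:

```shell
# preflight: run from your workstation, not the server.
# Confirms the tailnet path to the VPS works before you touch
# firewall or sshd config, so you never lose your only way in.
preflight() {
  ts_ip="$1"
  # Reachability over the tailnet.
  ping -c 3 -W 2 "$ts_ip" >/dev/null 2>&1 || { echo "ping failed"; return 1; }
  # Non-interactive SSH check; BatchMode avoids hanging on a password prompt.
  ssh -o ConnectTimeout=5 -o BatchMode=yes "admin@$ts_ip" true >/dev/null 2>&1 \
    || { echo "ssh over tailnet failed"; return 1; }
  echo "tailnet path OK"
}

# Usage: preflight 100.64.12.34
```

Run it (and require it to pass) before every step below that changes the firewall or SSH configuration.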
Step 5: Lock SSH to the Tailscale interface

Create a drop-in SSH daemon config:

```bash
sudo nano /etc/ssh/sshd_config.d/10-tailscale.conf
```

Add these lines (adjust user policy separately):

```
# Listen only on the Tailscale interface and localhost
ListenAddress 100.64.12.34
ListenAddress 127.0.0.1
```

Find your actual tailnet IP with `tailscale ip -4` and use that value. Validate the config, then restart SSH:

```bash
sudo sshd -t
sudo systemctl restart ssh
```

Expected output: `sshd -t` returns nothing (no errors). If you see errors, do not proceed until fixed.

Verification: from a public network, SSH to the VPS public IP should now fail. Over Tailscale, it should still work.
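One boot-order caveat: with `ListenAddress` pinned to the tailnet IP, sshd can fail to start if it tries to bind before tailscaled has brought the interface up. A systemd drop-in along these lines orders startup accordingly (the drop-in filename is our choice; `ssh.service` and `tailscaled.service` are the standard Debian unit names, but verify them on your host):

```ini
# /etc/systemd/system/ssh.service.d/after-tailscale.conf
[Unit]
After=tailscaled.service
Wants=tailscaled.service
```

Run `sudo systemctl daemon-reload` afterwards. Note this only orders the units; it does not guarantee the tailnet IP is already assigned at bind time, which is one more reason to keep `ListenAddress 127.0.0.1` and the provider console as fallbacks.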
Step 6: Harden inbound access with UFW (public HTTP(S) only)

If you already run UFW, review your rules first. Otherwise:

```bash
sudo apt -y install ufw
```

Set sane defaults:

```bash
sudo ufw default deny incoming
sudo ufw default allow outgoing
```

Allow your public web ports:

```bash
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```

Next, SSH. With UFW you can restrict by interface using route rules, but the simplest reliable approach is to keep public SSH closed via the `ListenAddress` change you already made, then explicitly allow SSH from the tailnet IP range as defense in depth:

```bash
sudo ufw allow from 100.64.0.0/10 to any port 22 proto tcp
```

Now enable UFW:

```bash
sudo ufw enable
sudo ufw status verbose
```

Expected output: rules for 80/443 open to anywhere, and port 22 allowed only from `100.64.0.0/10`.

If you want a careful SSH-safe workflow for firewall changes, the patterns in UFW firewall setup for a VPS translate well here, especially the “keep a second session open” habit.
Step 7: Bind internal services to Tailscale instead of 0.0.0.0

This is where most accidental exposure happens. Don’t rely on “the firewall probably blocks it.” Make the service listen only where you intend.

Example A: Grafana (port 3000)

Edit `/etc/grafana/grafana.ini` and set:

```ini
[server]
http_addr = 100.64.12.34
http_port = 3000
```

Restart:

```bash
sudo systemctl restart grafana-server
```

Verify it’s only listening on the tailnet IP:

```bash
ss -lntp | grep 3000
```

Expected output includes `100.64.12.34:3000`, not `0.0.0.0:3000`.

Example B: Redis (port 6379)

Edit `/etc/redis/redis.conf`:

```
bind 127.0.0.1 100.64.12.34
protected-mode yes
port 6379
```

Restart Redis and check:

```bash
sudo systemctl restart redis-server
ss -lntp | grep 6379
```

Expected output: Redis bound to localhost and tailnet IP only.

Firewall tip: even if you bind services to the tailnet IP, keep UFW rules restrictive. Defense in depth beats “I’m pretty sure it’s not exposed.”
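To audit every service at once instead of grepping port by port, the `ss` output can be filtered for wildcard binds. A small sketch; the function name is ours, not a standard tool:

```shell
# flag_public_listeners: print local addresses of TCP listeners bound
# to all interfaces (0.0.0.0, *, or [::]), i.e. "listen everywhere".
# Feed it headerless listener output:  ss -lntH | flag_public_listeners
flag_public_listeners() {
  # Field 4 of `ss -lntH` is the local address:port.
  awk '$4 ~ /^(0\.0\.0\.0|\*|\[::\]):/ {print $4}'
}
```

Anything it prints should either be intentionally public (80/443 in this setup) or get rebound to the tailnet IP.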
Step 8 (optional): Use Tailscale ACLs to restrict which devices can reach admin ports
If you have more than one or two engineers, add ACLs so only the right devices and people can reach SSH/Grafana/Redis. The exact policy syntax depends on your tailnet setup, but the outcomes should be obvious:
- Developers can reach staging ports.
- Only on-call admins can reach production SSH.
- CI devices can reach only what they deploy.
Keep Linux-side controls in place (users, sudoers, SSH keys). ACLs should narrow access, not replace host security.
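Those outcomes might look roughly like this in a tailnet policy file (HuJSON). Treat it as a sketch: the group names, user emails, and tags are invented for illustration, and you should check the current ACL syntax in Tailscale’s admin console before applying anything:

```jsonc
{
  // Who is in which role (illustrative identities).
  "groups": {
    "group:oncall": ["alice@example.com"],
    "group:dev":    ["bob@example.com"]
  },
  // Who may apply these tags to devices.
  "tagOwners": {
    "tag:prod":    ["group:oncall"],
    "tag:staging": ["group:oncall"]
  },
  "acls": [
    // Only on-call admins reach production SSH and admin ports.
    {"action": "accept", "src": ["group:oncall"],
     "dst": ["tag:prod:22", "tag:prod:3000", "tag:prod:6379"]},
    // Developers reach staging services only.
    {"action": "accept", "src": ["group:dev"], "dst": ["tag:staging:*"]}
  ]
}
```

Remember Tailscale ACLs deny by default once a policy is in place, so test from a non-admin device before relying on it.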
Verification checklist (what “done” looks like)
- Tailscale is connected: `tailscale status` shows your workstation, and the VPS row shows “linux”.
- SSH is not reachable from the public internet: from a machine not on Tailscale, `ssh admin@YOUR_PUBLIC_IP` fails with connection refused or a timeout (depending on your provider’s network path and firewall).
- SSH works over the tailnet: `ssh admin@100.64.12.34` gives a login prompt and a successful session.
- Admin services only listen on Tailscale/local: `ss -lntp` shows ports like 3000/6379 are not bound to `0.0.0.0`.
- Firewall reflects your intent: `sudo ufw status verbose` shows 80/443 allowed from anywhere, SSH restricted.
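Parts of this checklist can be scripted. Here is a sketch that inspects `ufw status` output for a world-open SSH rule; the function name is ours, and UFW’s table format can vary between versions, so adjust the pattern to what your host actually prints:

```shell
# check_ssh_scope: read `sudo ufw status` output on stdin and fail
# if port 22 is allowed from Anywhere instead of only the tailnet.
check_ssh_scope() {
  if grep -E '^22(/tcp)?[[:space:]]+ALLOW[[:space:]]+Anywhere' >/dev/null; then
    echo "FAIL: SSH open to the world"
    return 1
  fi
  echo "OK: no world-open SSH rule"
}

# Usage: sudo ufw status | check_ssh_scope
```

Wire checks like this into whatever runs after each deploy, so drift gets caught the day it happens rather than during an incident.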
Common pitfalls (and how to avoid a bad afternoon)
- You changed SSH listening addresses before confirming Tailscale SSH works. Keep a second root session open while you test, and make sure `ssh admin@100.x.y.z` works from your workstation first.
- UFW blocks something you forgot about. If your app needs inbound ports beyond 80/443 (for example a custom TCP API), allow it explicitly. Don’t “temporarily” open broad ranges.
- Services still bind to 0.0.0.0. Many daemons default to “listen everywhere.” Run `ss -lntp` after every change, not just at the end.
- Over-trusting the overlay network. Tailscale reduces exposure, but it doesn’t replace patching or log review. Pair this with disciplined updates; the workflow in VPS patch management in 2026 is a good operational baseline.
- No monitoring on the access layer. If Tailscale drops, you’ll want to know quickly. Centralizing system logs and basic network telemetry helps; VPS monitoring with OpenTelemetry Collector is a solid vendor-neutral option.
Rollback plan (get back in if you lock yourself out)
Rollback is straightforward if you undo changes in the right order. Do this first, then troubleshoot at your own pace.
- If you still have an active SSH session, revert the SSH listen changes first:

```bash
sudo rm -f /etc/ssh/sshd_config.d/10-tailscale.conf
sudo sshd -t
sudo systemctl restart ssh
```

SSH will listen on the default interfaces again.

- Disable UFW if it’s the blocker:

```bash
sudo ufw disable
```

- If you lost SSH entirely, use your provider’s out-of-band console (VNC/serial web console) to log in, then undo the two changes above. This is one reason a VPS provider with reliable console access matters.

- As a last resort, remove Tailscale and return to your previous network plan:

```bash
sudo systemctl stop tailscaled
sudo apt -y remove tailscale
```
Operational notes for teams (SRE-ish, but practical)
If you manage more than one server, standardize the small stuff. It pays off the first time you’re debugging access under pressure.
- Naming: use consistent Tailscale device names like `prod-api-1`, `prod-db-1`.
- Break-glass access: keep one emergency path (provider console, or a tightly controlled bastion) documented and tested quarterly.
- Backups before network changes: especially if you’re also binding databases to tailnet IPs. If you need a fast, testable backup discipline, pair this with VPS snapshot backup automation.
- Change control: treat firewall and remote access changes like production changes. Small diffs, quick verify, easy rollback.
If you’re locking down private dashboards, databases, or internal APIs, start with a VPS that gives you full network control and consistent performance. A HostMyCode VPS is a good fit for Tailscale-based admin access, and managed VPS hosting is a better option if you want the hardening and updates handled with a documented runbook.
FAQ
Does Tailscale replace a bastion host?
Often, yes. For small teams, a tailnet can remove the need for a dedicated jump box. For regulated environments, you might still keep a bastion for audited entry and as a break-glass path.
Should I still run Fail2Ban if SSH isn’t public?
If SSH is only reachable over Tailscale, brute-force noise drops close to zero. Still, Fail2Ban can help on public-facing services (Nginx auth endpoints, mail, panels). Don’t treat it as a substitute for closing exposure.
Is it safe to allow 100.64.0.0/10 in UFW?
It’s safe in the sense that those IPs only exist inside your tailnet routing context. The real control is the tailnet membership plus ACLs. Use both, and keep SSH bound to the Tailscale address to avoid accidental public exposure.
What if the VPS tailnet IP changes?
It usually stays stable for a device, but don’t depend on “usually” for production. If you want to avoid pinning SSH to a specific IP, you can bind SSH to the Tailscale interface by name using more advanced socket binding, but that’s less universal across setups. A pragmatic approach is to keep a small maintenance note: after any Tailscale re-auth event, confirm `tailscale ip -4` matches your SSH config.
Next steps
- Add service-level monitoring: track `tailscaled` health and key ports over the tailnet. If you prefer open standards, build around OpenTelemetry.
- Document your access model: which devices can reach which ports, and who approves new tailnet devices.
- Extend the pattern to databases: move Postgres/MySQL admin access to tailnet-only and keep public exposure limited to your application layer.
- Scale with confidence: as you add servers, consider a bigger plan and clearer ownership. If you want a stable base for multiple environments, deploy on a HostMyCode VPS and standardize the firewall and binding rules across hosts.
Summary: This Tailscale VPS VPN setup gives you private, encrypted admin access while keeping your public attack surface small. The payoff is immediate: scanners can hit your IP all day and find nothing to talk to except your web server.