
Why MongoDB Replication Matters for VPS Hosting
Database failures kill applications. Your MongoDB instance crashes, and suddenly your entire application becomes unavailable.
MongoDB replication setup eliminates this single point of failure. It maintains synchronized copies of your data across multiple servers.
A properly configured replica set provides automatic failover within seconds. When your primary node fails, the remaining members hold an election and promote a secondary to primary. Your application keeps running while you repair the failed server.
This tutorial covers production-ready MongoDB replication on Ubuntu 24.04 LTS. You'll learn user authentication, keyfile-based internal security, and monitoring. We'll build a three-node replica set that can survive hardware failures without data loss.
Prerequisites and Server Preparation
You need three Ubuntu 24.04 VPS instances with at least 2GB RAM each. Replica sets use an odd number of voting members so elections always produce a clear majority: a three-node set keeps its quorum (2 of 3 votes) after losing any single member.
First, update all three servers:
sudo apt update && sudo apt upgrade -y
sudo reboot
Configure your servers with static IP addresses. We'll use these example IPs throughout this tutorial:
- mongodb-primary: 192.168.1.10
- mongodb-secondary1: 192.168.1.11
- mongodb-secondary2: 192.168.1.12
Edit /etc/hosts on all three servers to include hostname mappings. This ensures consistent connectivity even if DNS fails.
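With the example addresses above, each server's /etc/hosts would gain lines like these (the hostnames are the ones used throughout this tutorial):

```
192.168.1.10  mongodb-primary
192.168.1.11  mongodb-secondary1
192.168.1.12  mongodb-secondary2
```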
Set up SSH key authentication between servers for easier administration. Generate a key pair on your primary server. Then copy the public key to the secondary nodes.
Installing MongoDB 8.0 on Ubuntu 24.04
Install MongoDB 8.0 on all three servers using the official repository. Start by importing the MongoDB GPG key:
curl -fsSL https://pgp.mongodb.com/server-8.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-8.0.gpg --dearmor
Add the MongoDB repository to your sources list:
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg ] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/8.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list
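The release codename in the repository line must match the Ubuntu version you are running (24.04 is "noble"). A quick way to check the codename of the current host:

```shell
# Print this host's release codename from os-release; the MongoDB repo
# entry should use the same codename (e.g. "noble" on Ubuntu 24.04).
. /etc/os-release
echo "$VERSION_CODENAME"
```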
Update the package index and install MongoDB:
sudo apt update
sudo apt install -y mongodb-org
Pin the MongoDB packages to prevent accidental upgrades:
echo "mongodb-org hold" | sudo dpkg --set-selections
echo "mongodb-org-database hold" | sudo dpkg --set-selections
echo "mongodb-org-server hold" | sudo dpkg --set-selections
echo "mongodb-org-mongos hold" | sudo dpkg --set-selections
echo "mongodb-org-tools hold" | sudo dpkg --set-selections
Create the MongoDB data directory and set proper ownership:
sudo mkdir -p /var/lib/mongodb
sudo chown mongodb:mongodb /var/lib/mongodb
sudo chmod 755 /var/lib/mongodb
Configuring MongoDB Replication Settings
Configure each MongoDB instance for replica set participation. Edit /etc/mongod.conf on each server with the following settings:
# Network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0   # listens on all interfaces; restrict access with your firewall

# Where and how to store data. Journaling is always on in MongoDB 6.1+,
# so the old storage.journal option must not appear here.
storage:
  dbPath: /var/lib/mongodb
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1

# Where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  logRotate: reopen

# Process management. Do not set fork: true; systemd manages the process.
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
  pidFilePath: /var/run/mongodb/mongod.pid

# Security
security:
  authorization: enabled
  keyFile: /etc/mongodb-keyfile

# Replication
replication:
  replSetName: "rs0"
The replica set name "rs0" must be identical across all members. The keyFile provides internal authentication between replica set members.
Generate a keyfile for internal replica set authentication:
openssl rand -base64 756 | sudo tee /etc/mongodb-keyfile > /dev/null
sudo chmod 400 /etc/mongodb-keyfile
sudo chown mongodb:mongodb /etc/mongodb-keyfile
Copy this keyfile to all replica set members with identical permissions. Each member must have the same keyfile for authentication to work.
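To confirm every node ended up with an identical, correctly protected keyfile, compare permissions and checksums on each member. A local sketch of the check (in practice, run the stat and sha256sum lines against /etc/mongodb-keyfile on each server; the temp directory just keeps the example self-contained):

```shell
# Generate a keyfile the same way the tutorial does, then verify it:
# the mode must be 400 and the sha256 hash must match on every member.
tmpdir=$(mktemp -d)
openssl rand -base64 756 > "$tmpdir/mongodb-keyfile"
chmod 400 "$tmpdir/mongodb-keyfile"
stat -c '%a' "$tmpdir/mongodb-keyfile"   # prints: 400
sha256sum "$tmpdir/mongodb-keyfile"      # hash must be identical across nodes
rm -rf "$tmpdir"
```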
Starting MongoDB and Configuring the Replica Set
Start MongoDB on all three servers:
sudo systemctl start mongod
sudo systemctl enable mongod
sudo systemctl status mongod
Connect from the primary server itself using the MongoDB shell. No users exist yet, so run mongosh locally and rely on the localhost exception, which allows initial setup before the first user is created:
mongosh --host localhost:27017
Initialize the replica set configuration:
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.1.10:27017", priority: 2 },
    { _id: 1, host: "192.168.1.11:27017", priority: 1 },
    { _id: 2, host: "192.168.1.12:27017", priority: 1 }
  ]
})
The higher priority value (2) makes the first server preferred during elections. Check the replica set status:
rs.status()
Wait for all members to reach SECONDARY or PRIMARY state. This process typically takes 30-60 seconds.
If you're using HostMyCode VPS instances, the fast SSD storage and optimized network configuration reduce sync times during initial replication.
Implementing Authentication and User Management
Create an administrative user while connected to the primary:
use admin
db.createUser({
  user: "admin",
  pwd: "SecureP@ssw0rd123",
  roles: [ { role: "root", db: "admin" } ]
})
Because authorization and the keyFile were already enabled in mongod.conf, no restart is needed; creating this first user simply closes the localhost exception, and every connection must now authenticate. Exit the MongoDB shell and test authentication by connecting with credentials:
mongosh --host 192.168.1.10:27017 -u admin -p --authenticationDatabase admin
Create application-specific users with limited privileges:
use myapp
db.createUser({
  user: "appuser",
  pwd: "AppP@ssw0rd456",
  roles: [ { role: "readWrite", db: "myapp" } ]
})
Never use the admin user for application connections. Create specific users with minimal required permissions for better security.
Configuring Automatic Failover and Election Priority
MongoDB handles failover automatically, but you can fine-tune the behavior. Connect to the primary as admin:
mongosh --host 192.168.1.10:27017 -u admin -p --authenticationDatabase admin
View current replica set configuration:
cfg = rs.conf()
printjson(cfg)
Modify election timeout and heartbeat settings for faster failover:
cfg.settings.electionTimeoutMillis = 5000
cfg.settings.heartbeatIntervalMillis = 1000
cfg.settings.heartbeatTimeoutSecs = 5
rs.reconfig(cfg)
Assign the fields individually rather than replacing cfg.settings wholesale; overwriting the whole object drops existing values such as replicaSetId and causes rs.reconfig() to fail.
Test failover by stopping the MongoDB service on the current primary:
sudo systemctl stop mongod
Connect to a secondary server and check replica set status. A new primary should be elected within 10 seconds.
Our MongoDB replica set tutorial covers additional election scenarios and advanced configuration options.
Read Preference and Connection String Configuration
Configure your applications to connect to the entire replica set. Don't connect to individual members.
Use connection strings that include all replica set members. Note that the @ in the example password must be percent-encoded as %40, or the driver will treat everything after the first @ as a host list:
mongodb://appuser:AppP%40ssw0rd456@192.168.1.10:27017,192.168.1.11:27017,192.168.1.12:27017/myapp?replicaSet=rs0&authSource=myapp
Set read preferences based on your application requirements:
- primary: all reads from the primary (default, strongest consistency)
- secondary: all reads from secondaries (reduces primary load)
- primaryPreferred: primary first, fall back to a secondary
- secondaryPreferred: secondaries first, fall back to the primary
For read-heavy applications, use secondaryPreferred to distribute load:
mongodb://appuser:AppP%40ssw0rd456@192.168.1.10:27017,192.168.1.11:27017,192.168.1.12:27017/myapp?replicaSet=rs0&readPreference=secondaryPreferred&authSource=myapp
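Passwords containing URI-reserved characters such as @ or : must be percent-encoded before they go into a connection string. One way to produce the encoded form, shown here for the example password used in this tutorial:

```shell
# Percent-encode a MongoDB password for use inside a connection URI.
# quote_plus escapes '@' as %40, ':' as %3A, and other reserved characters.
python3 -c 'from urllib.parse import quote_plus; print(quote_plus("AppP@ssw0rd456"))'
# prints: AppP%40ssw0rd456
```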
Monitoring Replication Health and Performance
Monitor replica set health using built-in MongoDB commands. Check replication lag regularly:
mongosh --host 192.168.1.10:27017 -u admin -p --authenticationDatabase admin
db.runCommand({replSetGetStatus: 1})
Look for the optimeDate field in each member's status. Large differences indicate replication lag.
Enable MongoDB's built-in profiler to monitor slow operations:
db.setProfilingLevel(1, { slowms: 100 })
db.system.profile.find().sort({ts: -1}).limit(5).pretty()
Set up log rotation to prevent disk space issues:
sudo nano /etc/logrotate.d/mongodb
Add the following logrotate configuration:
/var/log/mongodb/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 644 mongodb mongodb
    postrotate
        /bin/kill -SIGUSR1 `cat /var/run/mongodb/mongod.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
For comprehensive monitoring, consider integrating with VPS monitoring solutions. These can track MongoDB metrics alongside system resources.
Backup Strategy for Replicated MongoDB
Replica sets provide high availability but don't replace proper backups: an accidental dropDatabase replicates to every member within seconds.
Set up automated backups using mongodump on a secondary member. This avoids impacting primary performance:
#!/bin/bash
BACKUP_DIR="/backup/mongodb/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
mongodump --host 192.168.1.11:27017 \
  --username admin \
  --password SecureP@ssw0rd123 \
  --authenticationDatabase admin \
  --out "$BACKUP_DIR"
# Compress and clean old backups
tar -czf "${BACKUP_DIR}.tar.gz" -C "$(dirname "$BACKUP_DIR")" "$(basename "$BACKUP_DIR")"
rm -rf "$BACKUP_DIR"
find /backup/mongodb -name "*.tar.gz" -mtime +7 -delete
Schedule this script via cron for daily backups:
0 2 * * * /scripts/mongodb-backup.sh >> /var/log/mongodb-backup.log 2>&1
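Before relying on the script's find -mtime +7 cleanup, you can sanity-check the retention rule against a scratch directory:

```shell
# Simulate the retention rule: archives older than 7 days are deleted,
# recent ones are kept.
tmpdir=$(mktemp -d)
touch -d '10 days ago' "$tmpdir/old_backup.tar.gz"
touch "$tmpdir/fresh_backup.tar.gz"
find "$tmpdir" -name "*.tar.gz" -mtime +7 -delete
ls "$tmpdir"   # prints: fresh_backup.tar.gz
rm -rf "$tmpdir"
```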
Performance Optimization for Replica Sets
Optimize replica set performance by tuning WiredTiger cache size based on available RAM:
# For 4GB RAM servers, allocate 1.5GB to WiredTiger
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1.5
Block compression (snappy) and index prefix compression are WiredTiger defaults that reduce disk I/O; make them explicit in your configuration, or switch to zstd for a better compression ratio at slightly higher CPU cost:
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1.5
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
Configure appropriate readConcern and writeConcern for your application:
// For strong consistency across the replica set
db.collection.find().readConcern("majority")
// For writes acknowledged by a majority of members
db.collection.insertOne({data: "value"}, {writeConcern: {w: "majority", j: true}})
Our database connection pooling tutorial covers additional performance optimization techniques. These apply to MongoDB deployments too.
Troubleshooting Common Replication Issues
Address replica set split-brain scenarios by ensuring network connectivity between all members. Check firewall rules if members can't reach each other:
sudo ufw allow from 192.168.1.0/24 to any port 27017
If a member falls behind in replication, check disk space and I/O performance:
df -h /var/lib/mongodb
sudo iotop -a
Force a full resync of a severely lagging secondary. The resync command was removed in MongoDB 4.2, so in version 8.0 you trigger an initial sync by clearing the member's data directory:
# On the lagging secondary
sudo systemctl stop mongod
sudo rm -rf /var/lib/mongodb/*
sudo systemctl start mongod
The member rejoins the replica set empty and performs an automatic initial sync from another member. Only use this as a last resort, and confirm the rest of the set is healthy first.
Monitor oplog size to prevent replication issues:
use local
db.oplog.rs.stats()
db.oplog.rs.find().sort({ts: -1}).limit(1)
Increase the oplog size if your secondaries frequently fall behind. The size is given in megabytes, and the command must be run separately on each member:
db.adminCommand({"replSetResizeOplog": 1, "size": 2048})
Frequently Asked Questions
How many replica set members should I use?
Use odd numbers (3, 5, 7) to prevent election ties. Three members handle most production scenarios.
Add more members for increased read capacity or geographic distribution. Be aware that more members increase election time and write acknowledgment latency.
Can I run MongoDB replica sets across different data centers?
Yes, but configure member priorities and tags carefully. Set higher priority for members in your primary data center.
Use write concerns that ensure data reaches multiple data centers before acknowledging writes. This prevents data loss during network partitions.
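One way to express this, assuming two sites and the three members from this tutorial (the tag name dc and the write concern name multiDC are illustrative, not required names):

```
cfg = rs.conf()
cfg.members[0].tags = { "dc": "east" }
cfg.members[1].tags = { "dc": "east" }
cfg.members[2].tags = { "dc": "west" }
cfg.settings.getLastErrorModes = { "multiDC": { "dc": 2 } }
rs.reconfig(cfg)
```

Applications can then write with { writeConcern: { w: "multiDC" } }, which is only acknowledged once the data has reached members carrying two distinct dc tag values.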
What happens if all secondary members fail?
The primary continues accepting writes but cannot acknowledge writes with w:"majority" concern. Applications using majority write concern will experience write failures until at least one secondary recovers.
Configure monitoring to alert on replica set member failures immediately.
How do I safely remove a member from the replica set?
First remove the member from the replica set configuration using rs.remove(). Then stop MongoDB on that server.
Never just stop MongoDB without removing the member first. This can cause unnecessary elections and temporary unavailability.
Should I backup from primary or secondary members?
Always backup from secondary members to avoid impacting primary performance. Mongodump operations can be I/O intensive and may slow down primary operations.
Configure your backup scripts to connect to secondary members with appropriate read preferences.